Credibility is key to scientific communication. It’s the flipside of trust. Journals have to trust authors, professionals have to trust journals, and the public has to be able to trust popular translations of information out of journals in addition to trusting that the entire process is solid.
When trust is betrayed, even infrequently, suspicions creep in, and the credibility of scientific communication is put at risk.
A recent study hit on all these areas of trust and credibility. The study, published in the journal Pathophysiology (which oddly doesn’t indicate it’s a peer-reviewed journal, despite being an Elsevier title), postulates that an observed 10% greater prevalence of left-sided cancers (left breast, left testicle, the left side of the body in general) is potentially caused by box spring mattresses. The authors believe the box springs act as antennae (because certain of their dimensions are supposedly mathematically related to the wavelengths of radio signals), amplifying ambient radio and EMF waves into people as they sleep on their left sides, a common position, and thereby increasing the risk of developing cancer.
How’s that for a premise?
The authors compared Western populations with Asian populations (who sleep on futons), and found that the Asian population didn’t show a similar “handedness” in its cancers.
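To get a feel for the arithmetic behind the antenna premise, here is a minimal back-of-the-envelope sketch. The FM-broadcast frequencies are my own assumption, chosen only to show the scale of the numbers involved (they are not taken from the paper); it simply compares a half-wavelength at those frequencies to the length of a bed.

```python
# Back-of-the-envelope check of the "bed as half-wave antenna" premise.
# The FM-band frequencies below are illustrative assumptions, not values from the study.
C = 3.0e8  # speed of light, m/s

for freq_mhz in (87.5, 100.0, 108.0):       # rough edges and midpoint of the FM broadcast band
    wavelength_m = C / (freq_mhz * 1e6)     # lambda = c / f
    half_wave_m = wavelength_m / 2          # a half-wave resonator is half that length
    print(f"{freq_mhz:5.1f} MHz -> wavelength {wavelength_m:.2f} m, "
          f"half-wave {half_wave_m:.2f} m (a standard bed is roughly 2 m long)")
```

Numbers in the same general ballpark are, of course, a dimensional coincidence, not a mechanism, and that is where the skepticism comes in.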
It’s an arresting hypothesis, but given the number of times claims that EMF exposure causes cancer have been debunked, my skepticism immediately kicked in. The story gets even more interesting: the scientist who wrote the paper, Olle Johansson, was named “Misleader of the Year” in 2004 by Swedish Sceptics [sic], a non-profit watchdog group that “tries to inform the public and the media about the nature of science, explaining which questions are scientifically meaningful and which are not.”
Yet, the study was published in an Elsevier journal and amplified by a blog affiliated with Scientific American — a “guest blog” with “Commentary invited by the Editors of Scientific American.” The author of the blog post, R. Douglas Fields, PhD, is described thusly:
Chief of the Nervous System Development and Plasticity Section at the National Institute of Child Health and Human Development and Adjunct Professor at the University of Maryland, College Park. Fields, who conducted postdoctoral research at Stanford University, Yale University, and the NIH, is Editor-in-Chief of the journal Neuron Glia Biology and member of the editorial board of several other journals in the field of neuroscience. He is the author of the new book The Other Brain (Simon and Schuster), about cells in the brain (glia) that do not communicate using electricity. His hobbies include building guitars, mountain climbing, and scuba diving. He lives in Silver Spring, Md.
Scientific American adds this to the blog:
The views expressed are those of the author and are not necessarily those of Scientific American.
Blogs have become a problematic area of branding for existing publishers, and for the life of me, I can’t see why. Is it because there’s “technology” involved? Is it because there’s a stigma to blogs that somehow clouds thinking? Problems with the branding, sub-branding, or non-branding of blogs are becoming a theme.
Readers don’t seem to have any difficulty comprehending the branding or credibility issues. Comments on the original Scientific American blog post call out the problems with the study and Fields’ coverage in no uncertain terms. They started within minutes of the post going up:
Except, of course, that radio waves do not have enough energy to do anything with our cells.
This is absurd and not worthy of SciAm, although I must say the online version is not up to the print standard anyway.
Do you even know who this Johansson is, Douglas? Please do a Google search before you write anything next time.
This is ridiculous. How am I supposed to respect Scientific American when they post junk science like this? Radio waves are non-ionizing radiation and DO NOT CAUSE CANCER. You could teach a child the correlation between energy and cancer risk by showing them the electromagnetic spectrum and pointing out where UV, X-Ray, and Gamma lie in comparison to visible light and radio waves.
And so on. In fact, one commenter got to the point that occurred to me immediately:
The simple answer is that in most countries, drivers sit on the left getting exposed to sunlight and UV. Case closed.
Given that drivers in Japan sit on the opposite side of the car from American drivers (and drive on the opposite side of the road), this makes sense. Genetic differences, reporting rates, medical systems, and other variables also have to be taken into consideration.
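The energy argument in the comments above is easy to make concrete. The sketch below uses the Planck relation E = hf; the example frequencies and the roughly 10 eV ionization threshold are illustrative assumptions on my part, not figures from the study or the commenters.

```python
# Photon energies for a few parts of the electromagnetic spectrum, in electron-volts,
# compared against a rough ~10 eV threshold for ionizing typical molecules.
# The specific frequencies/wavelengths are illustrative assumptions.
H_PLANCK = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19         # joules per electron-volt
C = 3.0e8              # speed of light, m/s

sources = {
    "FM radio (100 MHz)": 100e6,
    "visible light (550 nm)": C / 550e-9,
    "ultraviolet (100 nm)": C / 100e-9,
    "X-ray (0.1 nm)": C / 0.1e-9,
}

IONIZATION_THRESHOLD_EV = 10.0  # rough energy needed to knock an electron off a molecule

for name, freq_hz in sources.items():
    energy_ev = H_PLANCK * freq_hz / EV
    label = "ionizing" if energy_ev > IONIZATION_THRESHOLD_EV else "non-ionizing"
    print(f"{name:24s} photon energy ~ {energy_ev:.2e} eV ({label})")
```

Run as written, it shows radio-frequency photons sitting roughly seven orders of magnitude below the ionization threshold, which is exactly the point the commenters are making.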
Huge branding issues are tangled up in this mess. Pathophysiology, as noted before, doesn’t list peer review as part of its process, despite this being standard for most Elsevier journals. It also doesn’t note that it’s indexed in PubMed, even though it is. Is it peer-reviewed or not? Basic branding elements (it’s a journal, it’s published by Elsevier) suggest it’s peer-reviewed, but that might be an assumption. Can a publisher’s brand create this assumption?
For Scientific American, the branding of its blog, the branding of invited bloggers, and the vulnerabilities these bloggers and blogs might create for the brand should be leading its owners to manage it better. Branding distinctions aren’t clear, quality is not being maintained, and laughable items are getting through.
But the biggest issue is credibility. To me, this hits squarely on a problem I’ve worried about before — we’re publishing too much, thereby becoming a cause of filter failure rather than a solution. For people outside journals, the statement “a study found” or “a recent study” is more likely to induce eye-rolling than riveted interest. Being upstream from the mainstream media, we’re injecting unfiltered water, an anemic popular media isn’t filtering it further (trusting our process perhaps too much), and the public is more often finding the resulting sludge distasteful.
The entire scientific publishing genre is losing credibility with the public, putting the article, the journal, and the peer-review process at risk.
As one smart person covering the bedspring study and resulting skepticism noted, “Who’s right? Who knows.”
Is that really a sufficient result for published research?
Discussion
22 Thoughts on "Left-handed Cancer, Box Springs, Scientific American, Branding, and Credibility"
This is a problem I have worked on. The deep issue here is how scientific communication should handle scientific controversy. The great weakness of the literature has been that one does not publish controversy, even though it is the hallmark of the frontier.
Go to a conference and listen to a paper, then to the arguments in Q&A. Only the paper gets published. My colleagues and I have even contemplated recording and publishing what we call the “fights,” because that is where the scientific action really is.
Because the controversies are so well hidden in the literature, the public has the mistaken belief that science is simply the steady accumulation of facts. It is also the progressive erection and destruction of systems of thought.
A feature of the fundamentally disputational nature of science is that contrary opinions seldom go away, and never quickly. This is healthy, because discredited systems have a way of rebounding. The history of wave and particle theories of light is a classic case.
But sometimes the alternate view becomes irrational, though still held by some, as with the flat earth. The EMF-cancer link may be near this region, or it may not; it is a good scientific question.
But the bottom line to me is that blogging is here working the same magic it is working everywhere, namely making previously hidden controversy very visible. This in turn will need to bring on a rather different popular view of science, as something human, as it were. I even think controversy should be taught as part of K-12 science education, but it presently is not. Revolutions are like that; the textbooks are the last stage.
In short, it is because of the blog that I have no problem with the article being published. Science is about fights as much as facts. How do we best communicate that?
A very valuable comment. I think the posture many journals take, presenting “cleared” science that you can believe in, obscures basic issues like those you’re describing. The authority of science is translated into the public sphere in the garb of journals and studies, but these are losing credibility, and the lost authority isn’t being replaced by the more honest message that science is tough work, truth isn’t easily discerned, and debates will rage.
I think a key, unspoken debate of our times is this: Will science communication devolve into paper presentations and debates, or will some form of clear differentiation emerge separating “publication of quality” from “publication for provocation”? I don’t disagree that science needs provocative publications, but when they’re dressed up as sources of authority, it gets confusing for everyone. And confusion that isn’t necessary isn’t good.
Exactly, but the unspoken debate has to become spoken (and here we are!). Newspapers and manufacturers are going through the same crisis, thanks to real time commenting.
We have added commenting to our flagship product http://www.osti.gov/bridge/, so every DOE research report can be publicly challenged.
What we need are new concepts and descriptive terms to make the distinctions you describe. Concept confusions like this are essential to technological revolutions, because we have new things to talk about.
And the public can handle these new concepts of credibility, so censorship is not the answer. Welcome to the age of controversy.
The whole thing smells like opportunity to me. I wonder who will step up?
I think you’re right. If the filter is moving, who will take control of it?
How is the “filter moving”? Stuff like this is published all the time. What is changing is the impact of social media, which certainly brings opportunities.
And what kind of “control” are you referring to? Commercial or legislative?
Did you just have your coffee?
The filter seems to be moving to me because social media and other leveling effects have broken down the scarcity barrier keeping speculative science from the public, and the filters we’re trying (blogs, etc.) aren’t being well-managed by incumbent brands (SciAm, in this case), while other outlets and people at large are doing a better job of filtering. Hence, the filter is moving. If someone could find a better way to capture the social media filter, they might have something.
The kind of control I’m thinking about is self-control, brand control, and professional control. We need a little more of all that in existing brands seeking new horizons.
There never was a scarcity barrier. This speculative stuff used to simply get reported, with no commenting. Commenting is the new filter. So I see more control, not less, a huge amount more.
I do not understand your “branding” metaphor, so can’t comment on that. Do you mean reputation? Why do we need a new word?
There most definitely was a scarcity barrier — the expense and expertise involved in publication in the past made it so that only a fraction of the populace published. Now, everyone can. Result? Lack of scarcity. This is why there has been a huge proliferation of published materials in the sciences in the past decade. However, there has not been the same huge improvement in the filtering.
“Branding” was not used here as a metaphor, but to describe the fact that OSTI or NIH or DHHS or NEJM or JAMA or JBJS or JBC each has a brand it has to uphold. Extending while failing to control the brand experience is risky, and can lead to SciAm blogs that make readers question the quality of the overall effort.
This is reminiscent of Elsevier’s Medical Hypotheses, which, after a similar controversy, was forced to abandon its editorial-only review process and become a standard peer-reviewed journal.
I still believe there is a role for non-peer-reviewed journals in science, but the main issue I see in your post is whether Scientific American can amplify research like this to the lay public without being accountable for its role. I don’t see how printing a disclaimer somehow absolves SA of accountability.
Accountability for what? Do you think speculative research with public health implications should not be blogged about? Or should not be published at all? The blog comments do a fine job of pointing out the problems with the speculation.
I seem to be the only one who does not see a problem here.
Blogs translating science to the public should choose more carefully. I socialize with a lot of people outside of science, and they are really growing weary of junk science, laughable science, and stupid science making its way into the lay press. When a “frontier brand” like SciAm participates, when a journal isn’t even clear about whether it’s peer-reviewed (but can still be called a “journal” for the sake of coverage), and when SciAm does such a lousy job of covering the study that social media immediately nails the blogger as lacking, I think we have a bit of a breakdown in many places: journal publishers and editors, the blogger, brand managers, and professionalism overall.
That study shouldn’t have had a non-peer-reviewed outlet that could be confused with a peer-reviewed outlet, shouldn’t have been amplified by a trusted brand, etc. You really don’t see a problem here?
I don’t have the problem with the research that you do. Not that I believe the EMF scare; I come from electric power, where the big lines will light up a fluorescent tube in your hands. The hypothesis fits the facts, so the speculation is legitimate. All of string theory is this tenuous.
The public has a right to know about this theory. You are calling for a form of censorship that I would not approve of.
By the way, the difference between peer review and editorial review is just that between three and one readers. There is no magic here.
The issue with relying on blog commenters to correct flagrant inaccuracies is that 1) many, if not most, people don’t read the comments on most blogs, and 2) many will read the article before any comments are added and will never return to it.
Also, many blogs are parts of self-reinforcing communities (right-wing or left-wing political blogs, or the anti-religion science blogs, for example). On those blogs, the comments serve to reinforce the message of the blog, and dissension is ruthlessly weeded out.
I’d rather have a system that avoids terrible mistakes than one that constantly spreads misinformation but makes some halfhearted efforts at disputing that information after the fact.
Some worrisome news about online trust can be found here: So-Called “Digital Natives” Not Media Savvy, New Study Shows
During the study, one of the researchers asked a study participant, “What is this website?” The student answered, “Oh, I don’t know. The first thing that came up.”
That exchange sums up the overall results from this study: many students trusted in rankings above all else. In fact, a quarter of the students, when assigned information-seeking tasks, said they chose a website because – and only because – it was the first search result.
I thought the problem was lack of trust, but now it is too much trust. Maybe you folks should get your story straight.
I too use Google first and typically find that the first hit is sufficient. This is a deep fact about language that needs to be explained. And I don’t check credentials unless I am in a political realm. On the other hand, unlike these students I love Wikipedia.
So maybe the standards being applied in judging this study’s results are unrealistic. There is a lot of that going around.
The story is about whom to trust, and how brands and even genres (e.g., journals) can earn trust and then falter. SciAm is a trusted brand, but it stumbled here. Elsevier has stumbled plenty recently (fake journals in Australia, other incidents). The public wants to trust science, but the filter we present isn’t working. It works most of the time for people inside science, who understand that some studies are garbage, speculative, or preliminary. It doesn’t work when those studies get amplified for prurient interest or headline-mongering by brands we want to trust. And there are plenty of examples of when Google shouldn’t be trusted on key topics.
I don’t agree that SciAm stumbled here (by which I assume you mean made a mistake). Thanks to blogs the public is learning that science includes garbage, speculation and controversy, not to mention fads and political movements. (Now we have to figure out how to teach this.)
Call it the New Enlightenment. Thanks to social media your filter is obsolete.
Well, there’s no one story to get straight. Kent and I often have very different takes on the same issue. But I think it’s relevant here because Kent’s post was about the concept of trust and brands and how one goes about deciding what’s believable online.
As a discerning individual, I’m assuming you don’t immediately place complete trust in a source solely because it has a high Google ranking, correct? Is it a problem if students do this without reading the article or even looking to see what the actual source is? Has the automated algorithm used by Google replaced the need for expert authority?