A recent Pew Research Center survey on Americans' thoughts on genetically modified (GM) foods provides a lot of information about how people think of GM foods, as well as organic foods and healthy eating habits in general. Today, I want to focus on the parts of the survey that deal with what Americans think about the science behind GM foods. This survey closely mirrors one Pew conducted earlier in the year about climate change.
Peeking into the survey, I found a few stand-out points about trust in science.
- 35% of Americans think that scientists don't really understand the health effects of GM foods.
- 53% of Americans believe about “half or fewer” of scientists believe GM foods are safe to eat.
- 21% of Americans say they do not trust scientists to provide “full and accurate” information on the health effects of GM foods.
- 81% said they believe scientists' research on GM foods is based on the best available evidence most or some of the time.
- 80% said they believe scientists' research is based on scientists' desire to “help their industries.”
To try to make sense of this and review how the dissemination of research plays into these public perceptions, I had a conversation with Jamie L. Vernon, Ph.D., Director of Science Communications and Publications, Sigma Xi, and Editor-in-Chief, American Scientist. He recently launched the Research Communications Initiative to provide services to researchers who wish to directly communicate their work to the public. Before working at Sigma Xi, Jamie was an AAAS Science and Technology Policy Fellow and an ORISE Fellow at the U.S. Department of Energy (DOE), where he developed strategies to measure and communicate the economic impacts of the department’s investments in clean energy technologies. He has a B.S. in Zoology from North Carolina State University, an M.S. in Biotechnology from East Carolina University, and a Ph.D. in cell and molecular biology from The University of Texas at Austin.
Angela: Jamie, my attention was drawn to this survey because NPR did a story about it and the headline was “Americans Don’t Trust Scientists’ Take On Food Issues.” Ouch. Do any of the statistics above surprise you as a science communicator?
Jamie: These days nothing surprises me when it comes to public opinion. Having said that, my tendency to seek the silver lining leads me to find good news in that 21% number. This statistic is consistent with public polls that show the public generally “trusts” scientists. I suppose the result that I find most troubling is the 80% who believe scientists are trying to help “their industries.” This goes to the heart of the problem of public trust in science.
When the public conflates science with the advancement of industrial agendas, we risk losing the perception of objectivity that makes science our most effective institution for solving societal problems. Equating scientific support for the development of gene modification techniques with approval of certain corporate business models, for example, has led to tremendous pushback against agricultural engineering research. Past mistakes by scientists and science communicators on this issue have contributed to the perception that the scientific community is overly focused on “industries.” Scientists should avoid crossing the line between explaining the benefits of genetic modification and defending certain corporations’ use of the science to create and sell products.
We should explain more clearly that scientific progress is agnostic to its application. However, the public needs to know that advancement of science is an economic engine, driving innovation that leads to technologies we rely on every day.
Angela: “Help their industries” was a weird phrase to me, but yes, that is a concern. The uncertainty around government funding in the not-so-far-off future (in the wake of the US election results and the Brexit vote in the UK) may lead to research funding deficits that are met in part by industry. The other item that struck me was that 35% of Americans think that scientists don’t really know whether GM foods are safe, and many believe that scientists are not in agreement on GM foods. This is similar to what we are hearing now with climate change. But this is really not true. I wonder how much damage is done when mainstream news organizations cover food and climate related stories that seem to overturn each other on a weekly basis. Eat more chocolate. No wait, eat less. Red wine is harmful versus my preferred study that says red wine is better than exercise. Are we (the science communication community) pushing out too much information on studies that have limited significance? Does this lead to the public feeling like scientists don’t know what they are talking about?
Jamie: Right. Well, I’m not surprised to learn that the respondents think scientists neither know nor agree on whether GM foods are safe, despite recent polling data confirming that 88% of scientists believe genetically modified foods are safe. As you point out, the issue of GM food safety suffers from the same media failures as climate change, namely the false balance problem. The media’s inability to cover these complex issues in a way that conveys the statistical imbalance of opposition versus support has thwarted our ability to mount a meaningful response to climate change. Likewise, people who doubt broad scientific consensus on the safety of GM foods have slowed the deployment of new agricultural solutions. This delay impedes mitigation of environmental damage caused by large-scale farming and slows the distribution of drought- and disease-resistant crops to areas of the world that most need them.
Your second point is also less of an issue with science and more of a problem with media coverage of science. That is, the mainstream media too often reports new scientific results as if they invalidate all prior knowledge. A food study that suggests that chocolate is good or bad for you is only as good as the design of that particular research. The media seems to invest very little in communicating the relative significance or lack thereof for any individual study. For this reason, contemporary mainstream science reporting tends to legitimize and, in some cases, encourage poor nutritional decisions.
The scientific community reacts differently to newly published research data. Scientists who assume any individual result is a bulletproof fact run the risk of designing weak experiments in the future. So, they attribute a degree of uncertainty to the data and design future experiments to test the validity of the new result. The public doesn’t have the luxury of testing the result, so they rely on the media to accurately disseminate actionable information. For this reason, the media should be more disciplined about how they share new research data.
I find it personally insulting when reading a story based on a single study that suggests I need to change my lifestyle. On the other hand, on issues such as GM food safety, we’re talking about decades of research conducted by hundreds of scientists and supported by thousands of scientific papers all pointing to the same conclusion: GM foods present virtually no threat to public health. There’s reason to make confident life-choices based on this information. Sadly, the mainstream media outlets seem to lack the capacity to distinguish between these two types of scientific reporting. I believe this imprecision contributes to public doubt in science by undermining well-founded scientific conclusions.
Angela: Talking about the dissemination of research, do you think the public cares whether something has been peer reviewed by a journal? Are they okay with accepting science shared as a preprint? Or are they completely unaware of review? I have been concerned about overblown criticisms of the peer review process and the effect these criticisms may have on the public’s trust in science output. While I see an important role in evaluating long-held processes, and I support innovation and experimentation with different formats for peer review, it seems to me that headlines shouting about peer review being in crisis, or science (as a system) being broken, are not really helping the cause.
Jamie: I think there’s a broad range of public understanding when it comes to scientific publishing. I suspect that few non-scientists can distinguish between a “preprint” and a formal publication. This calls for clear designation of the type of information being shared with the public.
Some folks are familiar with impact factors and peer review, though. Individuals dealing with a health crisis, for example, will engross themselves in the literature, pitting one treatment against another. I’ve heard stories of patients educating their doctors about new research on their illnesses. Of course, these people rely on traditional indicators of reliability and may not be aware of weaknesses in the system that warrant caution.
The proliferation of specialized scientific publications, as you know, raises concerns about reviewer fatigue, shallow pools of expertise in certain areas, and the internal politics of science, including racial and gender biases. These are legitimate challenges, but it’s not clear to me whether they amount to a peer review crisis.
Advocates of open peer review might see errors in the literature as justification for revamping the entire system. In my opinion, peer review is fairly effective at detecting shoddy results based on poor experimental design, and less successful at catching “wrong” results based on good experimental design, the latter of which should be tolerated by a healthy scientific enterprise.
So, when I hear that there’s a peer review crisis, I wonder what we, the science communication community, expect from publishers. If we expect scientific purity, meaning zero tolerance for “wrong” results, then I know we’ll never be able to design a system that meets that goal. Rather than setting unreasonable expectations and calling it a crisis each time there’s a glitch, it’s incumbent upon us to empower the public with the proper degree of skepticism through our reporting.
Angela: It does seem like the pendulum is swinging wildly from “do more peer review” to “throw it out and just post stuff online.” The intent of peer review is to use experts in a field to evaluate research output. Authors scamming the system (of which there are relatively few) can be difficult to catch. One thing that I find non-researchers confused about is that scientific literature is largely self-correcting. Researchers know that. Jane Q. Public does not. Going forward, I do think there are some steps that could be taken to improve trust and communication.
One step forward would be more transparency in the entire process. My vision for output in the future is that everything will be available, discoverable, and linked. Grant applications, study design, perhaps a level of notes during the study, conference presentations, abstracts, data, figures, preprints, peer review reports of journal submissions, published journal articles, and post-publication discussion could be available and linked to each other to show the progression of the research.
Preprints in biomedicine are presenting a new challenge for reporting. I see the appeal for scientists to get their work out in the open quickly, but if the mainstream press wants to cover that study, it should be clear that the work was not peer reviewed. It would also be immensely helpful if authors returned to any preprints posted to include links to the final papers. Yes, I know that this means going back later and remembering to do it, but this may be one of the responsibilities you take on when you post a preprint.
What do you think, Jamie? What ideas do you have to make science communication work better going forward?
Jamie: I’m not sure any of the talk about preprints and transparency has much influence on public opinion. Generally, I think the average American views “science” as a monolithic institution. They just expect scientists to do their jobs well and ethically.
Unless there are individual circumstances that warrant further digging on a specific issue, I believe the public assumes the messages they hear from the media accurately reflect the scientists’ intent. In other words, when the media reports on new results, the general public assumes the report is backed by the scientists.
This means scientists have a responsibility to ensure that reporters get the story right. I’ve advised scientists who speak to reporters to answer their questions, then ask the reporter to repeat back what they heard, and make corrections as needed. That’s one way to improve science reporting.
The problem of trust is much bigger. Survey data suggests that U.S. public trust has generally remained stable from 1974 to 2010, except for respondents identifying as conservative. This is troubling, but not surprising. We touch on the possible reasons why conservatives, in particular, have lost confidence in science in the January-February 2017 issue of American Scientist.
In our inaugural Science Communication column, Matt Nisbet identifies gaps in the current communication landscape that may be contributing to the erosion of trust among certain groups. He argues that scientific advancement is perceived to largely benefit an elite segment of the population, while those suffering from decades of economic hardship feel excluded.
Nisbet suggests that financial investments in local media outlets will help by facilitating in-depth discussions of complex scientific issues that are oversimplified on the major news networks and by supporting conversations about the contributions science has made to the lives of average Americans. He also encourages scientists to address issues of inequality by calling for affordable education beyond STEM fields.
I think Nisbet is on to something. Science does have an elite element to it. I get that impression every time I visit my hometown in rural North Carolina. Science is just not what people talk about there. And when they do, it’s usually in the context of some “change” in the way we do things.
Rural America, conservative America, isn’t that comfortable with change. As you know, Donald Trump courted rural voters’ resistance to change by promising to return America back to a period of “greatness.” Presumably, he was referring to less complicated times. A recent study revealed that even messages about climate change harkening back to the way things used to be proved to be more persuasive with conservatives than future-looking narratives. So, when people who reside in these communities hear about artificial intelligence, robotics, genetically modified crops, and stem cell research, it’s understandable that they begin to question whether scientists are helping them or serving industry elites.
For years, I’ve encouraged scientists to step out of the lab to talk to people about the positive effects science has on society. More importantly, it’s worth it to let people know that scientists live in their communities and prosper and suffer just like everyone else.
Angela: We have certainly been talking a lot about “bubbles” lately. I guess that’s why I wanted to chat about these issues around science communication and how our “bubbles” provide a weird sort of distorting visual of what we hear about. On the publisher side of things, we are observers of the push and pull. Who are journal articles written for? Who should have access to articles? How should the dissemination of information be improved and how do we pay for it? It’s not just publishers debating these questions. Researchers seem not to have come to a consensus either.
Thanks for coming into the Kitchen with me. Readers, what are your thoughts on how scholarly communication can break through the bubbles? How can the scholarly publishing ecosystem improve scholarly communication?
Discussion
22 Thoughts on "Communicating Science: What Can We Do Better?"
Angela, to your concerns about preprints:
On bioRxiv, the largest source of preprints in biomedicine, every posted manuscript has this between the title and the abstract: “This article is a preprint and has not been peer-reviewed [what does this mean?].” Obviously, a journalist or her editor may choose not to pass along that information about their source but many do. The manuscript http://biorxiv.org/content/early/2016/06/23/055699 about the (insignificant) effects of cell phone radiation on rats got wide press coverage (attention score #21 in Altmetric’s 2016 top 100) and much of it mentioned the article’s preprint status.
Links between a preprint and its published version are indeed valuable. 60% of the manuscripts on bioRxiv are published in journals. The links are created automatically via matches between preprints and papers indexed by PubMed and CrossRef, so that authors don’t have to do it themselves.
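For readers curious what that automated linking might involve, here is a minimal, hypothetical sketch in Python. The actual bioRxiv/PubMed/CrossRef matching pipeline is more sophisticated; the normalized-title comparison, similarity threshold, and last-author check below are illustrative assumptions, not the real algorithm:

```python
# Hypothetical sketch of preprint-to-publication matching.
# The real bioRxiv/CrossRef pipeline is more sophisticated; this only
# illustrates the general idea: normalize titles, compare author lists.
import re
from difflib import SequenceMatcher

def normalize(title):
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def likely_match(preprint, paper, title_threshold=0.9):
    """Heuristic: near-identical titles plus the same last author."""
    title_sim = SequenceMatcher(
        None, normalize(preprint["title"]), normalize(paper["title"])
    ).ratio()
    same_last_author = (
        preprint["authors"][-1].lower() == paper["authors"][-1].lower()
    )
    return title_sim >= title_threshold and same_last_author

# Toy records standing in for a bioRxiv preprint and a journal paper.
preprint = {"title": "Effects of X on Y: a pilot study",
            "authors": ["A. Smith", "B. Jones"]}
paper = {"title": "Effects of X on Y: A Pilot Study.",
         "authors": ["A. Smith", "B. Jones"]}
print(likely_match(preprint, paper))  # True for this pair
```

A real system would also weigh abstract overlap and full author-list similarity, since titles often change between preprint and final publication.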
What seems to be missing from this discussion is that these are basically adversarial policy debates that happen to be science intensive. Both (or all) sides are paying heavily to get their word out and they have scientists supporting them. Each side has its studies. It is hard to see these policy issues as a communication problem, when communication is fueling the debate.
I briefly touched on this by mentioning the false balance problem. Yes, there are debates and both sides have scientists who back their arguments, but often the media presents the debate as though one opposing scientist’s opinion is equivalent to that of thousands of scientists backed by volumes of data. The communication problem is in the way the media covers the issue.
In the cases at issue the ratios you seem to claim do not exist. For example, Germany is not considering banning GMOs because of an opposition scientist. See http://www.reuters.com/article/us-germany-gmo-idUSKBN12X13S
These are very real debates, not communication problems. Blaming the press for giving too much time to their opponents is a standard argument on all sides. It is a symmetrically false argument.
How should the dissemination of information be improved and how do we pay for it?
Personally, I think requiring authors to write a summary of their paper directed at a broader audience is really helpful. It assumes some basic understanding of science, however, and is not the same as mainstream journalism. But it does allow journalists with a basic training in science to understand the point of the paper in less arcane, subject-specific language.
For example, PNAS requires authors to include a significance statement with their submission:
Significance Statement. Authors must submit a 120-word-maximum statement about the significance of their research paper written at a level understandable to an undergraduate-educated scientist outside their field of specialty. The primary goal of the Significance Statement is to explain the relevance of the work in broad context to a broad readership. The Significance Statement appears in the paper itself and is required for all research papers.
On the language point, this great handout from the American Geophysical Union was just shared with me and although sort of obvious on a word-by-word level, I like the overall point that this is making about scientists and public being two communities separated by a common language (thanks, Churchill): https://sharingscience.agu.org/files/2015/09/Watch-Your-Words-Handout.pdf
Every effort helps. Science in Society is a Northwestern University research center dedicated to science education and public engagement. One of the hallmarks of Science in Society’s approach to mission-driven community engagement is committing to the long term. Rome wasn’t built in a day. — Science in Society, Northwestern University
“How can the scholarly publishing ecosystem improve scholarly communication?” Well, we can begin by ensuring adherence to the Committee on Publication Ethics (COPE) guidelines
(http://publicationethics.org/). As I have noted before in Scholarly Kitchen, the journal that Jamie Vernon edits, American Scientist, is not a COPE signatory. But then, since we are now firmly in the post-truth era, perhaps that doesn’t matter any more?
The Jan-Feb issue of American Scientist, online today, has an article by Jamie Vernon on “Science in the Post-Truth Era.” My COPE comment there, similar to that above, was not approved by the comment editor.
It seems to me that we need reporters who are trained in science and who are able to provide simple explanations for complex findings. In short we need clear thinkers like Einstein who said when asked to explain relativity: “Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. That’s relativity.”
Perhaps we need some organizations that do fact-checking specifically for science the way that we have such organizations as Politifact and FactCheck.org for general information.
Pertinent to this discussion, the National Academies Press yesterday released “Communicating Science Effectively: A Research Agenda.” As with all their publications, a pdf version is available for free download: http://tinyurl.com/jmyb6tf
Indeed, this is a growing research area. See https://scholar.google.com/scholar?as_ylo=2012&q=%22communicating+science%22&hl=en&as_sdt=0,49 A lot of this research is specifically focused on the use of science in contentious policy domains like GMOs, or for that matter renewable energy.
In addition to wanting to trust that studies are accurate and that scientists are unbiased in their presentation of information, the public wants science to be asking the right questions. Aside from safety (which may be a real issue), what about the nutritional benefits of GM foods versus natural and whole foods? Isn’t that an equally important question? The public wants to know that science is proceeding not just on an industrial agenda.
Obviously I vote +1 to Jamie and Phil’s points about researchers communicating directly to / summarising their work for broader audiences. The PNAS example is great and several other publishers are working on this, whether as part of the editorial process, or by encouraging authors to use post-publication services like Kudos, or by separately commissioning lay abstracts from services like ResearchMedia. Of course the sooner in the research process the broader audiences can be considered (even involved), the better – Julie Bayley’s recent post on “Impact Literacy” (http://blogs.lse.ac.uk/impactofsocialsciences/2016/12/07/a-call-to-build-an-impact-literate-research-culture/) summarises some good thoughts on that.
By coincidence there is another great posting today on the LSE Impact Blog entitled “The focus on better communicating certain ‘truths’ is misplaced: academics must improve their emotional literacy” (by Ruth Dixon from the Blavatnik School of Government in Oxford). Closing thought: “Academics are able to make their opinions widely known via social and traditional media – their ready access to such platforms shows that they are indeed part of the ‘elite.’ They should be aware of the unintended impacts that their remarks – and even their tweets and retweets – can have; alienating the very people whom they wish to influence, and making deeper engagement and mutual learning difficult or impossible.”
I forgot the link for Ruth Dixon’s post, sorry! http://blogs.lse.ac.uk/impactofsocialsciences/2016/12/20/the-focus-on-better-communicating-certain-truths-is-misplaced-academics-must-improve-their-emotional-literacy/
Unfortunately, generic discussions involving sample sizes and extrapolations in bio-science have been frequently challenged for valid reasons, including “white coat quasi-science.”
Global climate science observations showing alarming trends are also easily challenged by local observations of the meandering “polar vortex.” Record-setting local heavy snowfalls can occur when increased water evaporation from warming large bodies of water collides with large-scale cool air penetrating into lower latitudes. The politically biased respond with “How can this be a result of global warming?” We will never resolve that issue with words, but the new GOES satellite images on Facebook may have a chance. https://www.facebook.com/NOAANCEIclimate/
This is a very timely issue, so thanks for doing this, Angela & Jamie!
No discussion about science communication is complete without mentioning the limited power that evidence has in changing beliefs. People believe things that are likely to be untrue for a number of reasons, many of which are deeply emotional. There’s also a strong human tendency to believe in conspiracies of all sorts, which usually requires people to believe things which are unlikely to be correct.
I would like to see science communicators start to look at the spread of beliefs about the world in more epidemiologic terms. Where does a likely-to-be-untrue idea start, who is susceptible to it, who is spreading it, & how do we limit the exposure of susceptible populations to it? How do we build “herd immunity” to these ideas?
A comprehensive approach to science communication would work on creating conditions for the audience to be receptive to the evidence. In the long view, that requires people to be educated in how to think, not what to think, but also to have opportunities, so they don’t feel so disadvantaged or lacking in control over their lives that they displace the blame onto external groups or seek the comfort of authoritarian leaders. With a population that is receptive to evidence and knows how to think critically, science communication will be a far easier task.
In the short term, I want to highlight the role of librarians, who are everyone’s local experts in helping people find the most credible sources of information. Once a wrong belief takes hold, it’s about as easy to cure it with rationality as it is to cure measles by clever argument, but if they get directed to the right information at the start, there’s a chance they won’t get ill in the first place.
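To make the epidemiologic framing above concrete, here is a toy sketch using a standard SIR (susceptible-infectious-recovered) compartment model, where “infection” stands for adopting a likely-to-be-untrue idea and the initially “immune” are people already equipped to resist it. All parameters are illustrative, not fitted to any real data:

```python
# Toy SIR model of idea spread. Parameters are purely illustrative.
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR equations (population fractions)."""
    new_infections = beta * s * i * dt   # contact-driven adoption
    recoveries = gamma * i * dt          # people stop spreading the idea
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def final_infected(immune, beta=0.3, gamma=0.1, steps=1000):
    """Fraction who ever adopt the idea, given an initially immune fraction."""
    s, i, r = 1.0 - immune - 0.01, 0.01, immune  # seed 1% "infected"
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return 1.0 - immune - s  # everyone who ever left the susceptible pool

# Raising the initially "immune" (critically thinking) fraction shrinks
# the idea's eventual reach -- herd immunity in action.
print(final_infected(0.0) > final_infected(0.5))  # prints True
```

Even this crude model captures the commenter’s point: limiting the exposure of susceptible populations, or shrinking that population through education, matters more than arguing with the already “infected.”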
Rural America, conservative America, isn’t that comfortable with change. As you know, Donald Trump courted rural voters’ resistance to change by promising to return America back to a period of “greatness.” Presumably, he was referring to less complicated times.
Yikes!
Saul Bellow once wrote something to this effect: “ideas begin on the coasts and become threadbare by the time they reach the Midwest, so we can see through them.”
Let’s appreciate the difference between fashion, science and cold hard economics.
Most science writing is primarily written to entertain, sometimes it also informs but all too often it is written to advocate for a fashionable position. Rural people are less susceptible to fashion because it does not carry the social reward that it carries in urban areas. When everyone knows who you are, changing clothes or opinions is not going to help you much.
And therein is the clash between the urban and rural. Urbanites all too often view their rural cousins as resistant to change and new ideas, while rural people view their urban cousins as slaves to whatever idea flashes across their smart phones.
As for science, I can think of few occupations that require such a firm grip on reality as farming. While academics can publish on climate change and GMOs without career consequences, all it takes for a farmer is one bad decision in one bad year and the banker pays a call.
Farmers are acutely aware of not only climate – but micro-climate. Almost all farmers rent fields and move their equipment around during the year. It is not unusual to have a one degree Fahrenheit difference between the northernmost field and the southernmost one. But as for climate change, they see a cognitive disconnect between what they hear from The NYT, WaPo and MSM about climate change and what their records show.
As for economics…. you bet they don’t like some of the changes they have been experiencing. I can name you six companies within a radius of 30 miles that have moved to Mexico in the last ten years. At the same time, the cost of healthcare in our area for a single middle aged person with no history of medical issues has risen from $550 in 2015 to $750 in 2016 to $1,100 for this coming year. The average wage around here is $12/hr. Do the math.
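Doing that math explicitly (a rough sketch, assuming the quoted figures are monthly premiums and a 40-hour work week, before taxes):

```python
# Back-of-envelope using the figures in the comment above.
# Assumes a 40-hour week, 52 weeks, and that $1,100 is a monthly premium.
hourly_wage = 12.0
gross_annual = hourly_wage * 40 * 52   # $24,960 before taxes
premium_2017 = 1_100 * 12              # $13,200 per year
share = premium_2017 / gross_annual
print(f"{share:.0%} of gross income")  # prints '53% of gross income'
```

On those assumptions, health insurance alone would consume over half of a gross annual wage, which is the commenter’s point.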
I see it like this…. the academics who study rural people and the reporters who write about them, remind me of white cops in a black neighborhood. Though the majority of them are decent and honorable people and are good at their jobs…. they will never understand the people who live there.
Did you really intend to use commas in this sentence? I think most scientists do NOT assume an individual result is valid. >>>>>’Scientists, who assume any individual result is a bulletproof fact, run the risk of designing weak experiments in the future.’
To expand on a point that I made at the beginning, there seems to be a deep misconception here. I have studied both the climate and GMO cases for several decades and these are honest debates. All sides marshal scientific evidence and scientists to their cause, so there is no problem of communication, nothing to improve in the sense that it might make the debate change or go away.
On the contrary, enormous energy is spent debating the science in great detail. So if the public concludes that the debate is real they are not wrong.
The basic fact, which people on all sides have a very hard time accepting, is that in complex cases reasonable people can look at the same evidence and come to opposite conclusions. The weight of evidence is relative to the observer. It is based upon a great many personal factors. There is nothing in the science of evidence (inductive logic) to suggest otherwise.