A recent Pew Research Center survey on Americans’ thoughts on genetically modified (GM) foods provides a lot of information about how people think of GM foods as well as organic foods and healthy eating habits in general. Today, I want to focus on the parts of the survey that deal with what Americans think about the science behind GM foods. This survey closely mirrors one Pew conducted earlier in the year about climate change.
Peeking into the survey, there were a few stand-out points about trust in science.
- 35% of Americans think that scientists don’t really understand the health effects of GM foods.
- 53% of Americans believe about “half or fewer” of scientists believe GM foods are safe to eat.
- 21% of Americans say they do not trust scientists to provide “full and accurate” information on the health effects of GM foods.
- 81% said they believe scientists’ research on GM foods is based on the best available evidence most or some of the time.
- 80% said they believe scientists’ research is based on scientists’ desire to “help their industries.”
To try to make sense of this and review how the dissemination of research plays into these public perceptions, I had a conversation with Jamie L. Vernon, Ph.D., Director of Science Communications and Publications, Sigma Xi, and Editor-in-Chief, American Scientist. He recently launched the Research Communications Initiative to provide services to researchers who wish to communicate their work directly to the public. Before working at Sigma Xi, Jamie was an AAAS Science and Technology Policy Fellow and an ORISE Fellow at the U.S. Department of Energy (DOE), where he developed strategies to measure and communicate the economic impacts of the department’s investments in clean energy technologies. He has a B.S. in Zoology from North Carolina State University, an M.S. in Biotechnology from East Carolina University, and a Ph.D. in cell and molecular biology from The University of Texas at Austin.
Angela: Jamie, my attention was drawn to this survey because NPR did a story about it and the headline was “Americans Don’t Trust Scientists’ Take On Food Issues.” Ouch. Do any of the statistics above surprise you as a science communicator?
Jamie: These days nothing surprises me when it comes to public opinion. Having said that, my tendency to seek the silver lining leads me to find good news in that 21% number. This statistic is consistent with public polls that show the public generally “trusts” scientists. I suppose the result that I find most troubling is the 80% who believe scientists are trying to help “their industries.” This goes to the heart of the problem of public trust in science.
When the public conflates science with the advancement of industrial agendas, we risk losing the perception of objectivity that makes science our most effective institution for solving societal problems. Equating scientific support for the development of gene modification techniques with approval of certain corporate business models, for example, has led to tremendous pushback against agricultural engineering research. Past mistakes by scientists and science communicators on this issue have contributed to the perception that the scientific community is overly focused on “industries.” Scientists should avoid crossing the line between explaining the benefits of genetic modification and defending certain corporations’ use of the science to create and sell products.
We should explain more clearly that scientific progress is agnostic to its application. However, the public needs to know that the advancement of science is an economic engine, driving innovation that leads to technologies we rely on every day.
Angela: “Help their industries” was a weird phrase to me, but yes, that is a concern. The uncertainty around government funding in the not-so-distant future (in the wake of the US election results and the Brexit vote in the UK) may lead to research funding deficits that are met in part by industry. The other item that struck me was that 35% of Americans think that scientists don’t really know whether GM foods are safe, and many believe that scientists are not in agreement on GM foods. This is similar to what we are hearing now with climate change. But this is really not true. I wonder how much damage is done when mainstream news organizations cover food- and climate-related stories that seem to overturn each other on a weekly basis. Eat more chocolate. No wait, eat less. Red wine is harmful versus my preferred study that says red wine is better than exercise. Are we (the science communication community) pushing out too much information on studies that have limited significance? Does this lead to the public feeling like scientists don’t know what they are talking about?
Jamie: Right. Well, I’m not surprised to learn that the respondents think scientists neither know nor agree on whether GM foods are safe, despite recent polling data confirming that 88% of scientists believe genetically modified foods are safe. As you point out, the issue of GM food safety suffers from the same media failures as climate change, namely the false balance problem. The media’s inability to cover these complex issues in a way that conveys the statistical imbalance of opposition versus support has thwarted our ability to mount a meaningful response to climate change. Likewise, people who doubt broad scientific consensus on the safety of GM foods have slowed the deployment of new agricultural solutions. This delay impedes mitigation of environmental damage caused by large-scale farming and slows the distribution of drought- and disease-resistant crops to areas of the world that most need them.
Your second point is also less of an issue with science and more of a problem with media coverage of science. That is, the mainstream media too often reports new scientific results as if they invalidate all prior knowledge. A food study that suggests that chocolate is good or bad for you is only as good as the design of that particular research. The media seems to invest very little in communicating the relative significance or lack thereof for any individual study. For this reason, contemporary mainstream science reporting tends to legitimize and, in some cases, encourage poor nutritional decisions.
The scientific community reacts differently to newly published research data. Scientists who assume any individual result is a bulletproof fact run the risk of designing weak experiments in the future. So, they attribute a degree of uncertainty to the data and design future experiments to test the validity of the new result. The public doesn’t have the luxury of testing the result, so they rely on the media to accurately disseminate actionable information. For this reason, the media should be more disciplined about how they share new research data.
I find it personally insulting when reading a story based on a single study that suggests I need to change my lifestyle. On the other hand, on issues such as GM food safety, we’re talking about decades of research conducted by hundreds of scientists and supported by thousands of scientific papers all pointing to the same conclusion: GM foods present virtually no threat to public health. There’s reason to make confident life-choices based on this information. Sadly, the mainstream media outlets seem to lack the capacity to distinguish between these two types of scientific reporting. I believe this imprecision contributes to public doubt in science by undermining well-founded scientific conclusions.
Angela: Talking about the dissemination of research, do you think the public cares about whether something is peer reviewed by a journal? Are they okay with accepting science shared as a preprint? Or are they completely unaware of review? I have been concerned about overblown criticisms of the peer review process and what effect these criticisms may have on the public’s trust in science output. While I see an important role in evaluating long-held processes, and I support innovation and experimentation with different formats for peer review, it seems to me that headlines shouting about peer review being in crisis, or science (as a system) being broken, are not really helping the cause.
Jamie: I think there’s a broad range of public understanding when it comes to scientific publishing. I suspect that few non-scientists can distinguish between a “preprint” and a formal publication. This calls for clear designation of the type of information being shared with the public.
Some folks are familiar with impact factors and peer review, though. Individuals dealing with a health crisis, for example, will engross themselves in the literature, pitting one treatment against another. I’ve heard stories of patients educating their doctors about new research on their illnesses. Of course, these people rely on traditional indicators of reliability and may not be aware of weaknesses in the system that warrant caution.
The proliferation of specialized scientific publications, as you know, raises concerns about reviewer fatigue, shallow pools of expertise in certain areas, and the internal politics of science, including racial and gender biases. These are legitimate challenges, but it’s not clear to me whether they amount to a peer review crisis.
Advocates of open peer review might see errors in the literature as justification for revamping the entire system. In my opinion, peer review is fairly effective at detecting shoddy results based on poor experimental design, and it is less successful at catching “wrong” results based on good experimental design. The latter should be tolerated by a healthy scientific enterprise.
So, when I hear that there’s a peer review crisis, I wonder what we, the science communication community, expect from publishers. If we expect scientific purity, meaning zero tolerance for “wrong” results, then I know we’ll never be able to design a system that meets that goal. Rather than setting unreasonable expectations and calling it a crisis each time there’s a glitch, it’s incumbent upon us to empower the public with the proper degree of skepticism through our reporting.
Angela: It does seem like the pendulum is swinging wildly from “do more peer review” to “throw it out and just post stuff online.” The intent of peer review is to use experts in a field to evaluate research output. Authors scamming the system (of which there are relatively few) can be difficult to catch. One thing that I find non-researchers confused about is that scientific literature is largely self-correcting. Researchers know that. Jane Q. Public does not. Going forward, I do think there are some steps that could be taken to improve trust and communication.
One step forward would be more transparency in the entire process. My vision for output in the future is that everything will be available, discoverable, and linked. Grant applications, study design, perhaps a level of notes during the study, conference presentations, abstracts, data, figures, preprints, peer review reports of journal submissions, published journal articles, and post-publication discussion could be available and linked to each other to show the progression of the research.
Preprints in biomedicine are presenting a new challenge for reporting. I see the appeal for scientists of getting their work out in the open quickly, but if the mainstream press wants to cover such a study, it should be clear that the work was not peer reviewed. It would also be immensely helpful if authors returned to any posted preprints to include links to the final papers. Yes, I know that this means going back a while later and remembering to do it, but this may be one of the responsibilities you take on when you post a preprint.
What do you think, Jamie? What ideas do you have to make science communication work better going forward?
Jamie: I’m not sure any of the talk about preprints and transparency has much influence on public opinion. Generally, I think the average American views “science” as a monolithic institution. They just expect scientists to do their jobs well and ethically.
Unless there are individual circumstances that warrant further digging on a specific issue, I believe the public assumes the messages they hear from the media accurately reflect the scientists’ intent. In other words, when the media reports on new results, the general public assumes the report is backed by the scientists.
This means scientists have a responsibility to ensure that reporters get the story right. I’ve advised scientists who speak to reporters to answer their questions, then ask the reporter to repeat back what they heard, and make corrections as needed. That’s one way to improve science reporting.
The problem of trust is much bigger. Survey data suggest that U.S. public trust in science generally remained stable from 1974 to 2010, except among respondents identifying as conservative. This is troubling, but not surprising. We touch on the possible reasons why conservatives, in particular, have lost confidence in science in the January-February 2017 issue of American Scientist.
In our inaugural Science Communication column, Matt Nisbet identifies gaps in the current communication landscape that may be contributing to the erosion of trust among certain groups. He argues that scientific advancement is perceived to largely benefit an elite segment of the population, while those suffering from decades of economic hardship feel excluded.
Nisbet suggests that financial investments in local media outlets will help by facilitating in-depth discussions of complex scientific issues that are oversimplified on the major news networks and by supporting conversations about the contributions science has made to the lives of average Americans. He also encourages scientists to address issues of inequality by calling for affordable education beyond STEM fields.
I think Nisbet is on to something. Science does have an elite element to it. I get that impression every time I visit my hometown in rural North Carolina. Science is just not what people talk about there. And when they do, it’s usually in the context of some “change” in the way we do things.
Rural America, conservative America, isn’t that comfortable with change. As you know, Donald Trump courted rural voters’ resistance to change by promising to return America to a period of “greatness.” Presumably, he was referring to less complicated times. A recent study revealed that even messages about climate change that harken back to the way things used to be proved more persuasive with conservatives than future-looking narratives. So, when people who reside in these communities hear about artificial intelligence, robotics, genetically modified crops, and stem cell research, it’s understandable that they begin to question whether scientists are helping them or serving industry elites.
For years, I’ve encouraged scientists to step out of the lab to talk to people about the positive effects science has on society. More importantly, it’s worth letting people know that scientists live in their communities and prosper and suffer just like everyone else.
Angela: We have certainly been talking a lot about “bubbles” lately. I guess that’s why I wanted to chat about these issues around science communication and how our “bubbles” provide a weird sort of distorting visual of what we hear about. On the publisher side of things, we are observers of the push and pull. Who are journal articles written for? Who should have access to articles? How should the dissemination of information be improved and how do we pay for it? It’s not just publishers debating these questions. Researchers seem not to have come to a consensus either.
Thanks for coming into the Kitchen with me. Readers, what are your thoughts on how scholarly communication can break through the bubbles? How can the scholarly publishing ecosystem improve scholarly communication?