Spinning top, bought in Prague (Photo credit: Wikipedia)

There are plenty of reasons to worry about whether scientists exaggerate their results, including both direct and indirect evidence that they do (suspiciously lumped p-values and problems with reproducibility). The growing reliance on soft money has not only exploded author lists (everyone needs credit because everyone needs their next grant) but has also increased the pressure for positive results.

Now, it seems we can add one more bit of evidence of exaggerated claims — an inordinate amount of “spin” in abstracts, which finds its way into press releases and is amplified by subsequent reporting.

This is important because more than half of US adults follow health news closely, and 90% of the public gets most of its information about science from mass media outlets.

The study, published in this month’s PLoS Medicine, continues a trend of strong studies around bias and press releases, but creates a clearer link between exaggerations in abstracts and later amplification in the media.

The definition of “spin” is a crucial first step to identifying it, and the authors of the latest study boil it down easily:

We defined “spin” as a specific reporting (intentional or unintentional) that emphasizes the beneficial effect of the experimental treatment.

This picks up on an earlier study that defined spin for non-significant primary outcomes. That earlier definition was a little more pointed, using “for whatever motive” where the current study opts for the more banal “(intentional or unintentional)” wording. I prefer to think scientists aren’t unintentional in their reporting.

The results aren’t encouraging. Focusing on randomized controlled trials (RCTs), the authors found that while 54% of the 70 articles studied showed positive results in their full text, fully 79% of press releases extolled positive results. This 25-percentage-point gap largely came from spinning studies with neutral results: 16 RCTs with neutral results were presented as positive, compared to only one study with non-beneficial effects that was “spun” to look positive.
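The arithmetic is worth a quick check. Here is a minimal sketch (assuming, as the study’s matched design implies, one press release per article; converting the reported percentages back to whole-article counts is my own rounding, not a figure from the paper) showing that the 16 spun neutral trials plus the one spun non-beneficial trial account for essentially the entire gap:

```python
# Minimal sketch: does the count of "spun" trials account for the gap between
# positive full-text results (54%) and positive press releases (79%)?
# Counts come from the study as summarized above; rounding the percentages
# back to whole-article counts is an assumption on my part.

n_trials = 70
positive_fulltext = round(0.54 * n_trials)   # ~38 articles positive in full text
positive_releases = round(0.79 * n_trials)   # ~55 press releases framed as positive

spun_neutral = 16        # neutral RCTs presented as positive
spun_nonbeneficial = 1   # non-beneficial trial presented as positive

gap_points = (positive_releases - positive_fulltext) / n_trials * 100
spin_points = (spun_neutral + spun_nonbeneficial) / n_trials * 100

print(f"gap: {gap_points:.0f} points; explained by spin: {spin_points:.0f} points")
# Output: gap: 24 points; explained by spin: 24 points
```

The two figures line up almost exactly, which is why the inflation can be attributed nearly entirely to neutral trials being reframed as positive.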

Interestingly, general medical journals were far less prone to this. Most of the overestimation of results came from specialist journals (45% vs. 6% for general medical journals, p<0.001). Smaller sample sizes seemed to play a role, which invites the question of whether these small trials received more funding based on biased results reporting. This study doesn’t tackle that issue.

Other variables weren’t important — for instance, it didn’t matter who funded the studies, who wrote the press release, or what kind of experimental treatment was described.

“Spin” actually accelerated through the news cycle: while 41% of abstracts had “spin,” 46% of the matched press releases had spin, and 51% of the news items had spin. So while scientists started the spin, they were ably abetted by PR accomplices and by journalists themselves. Soft money pressures might also be insinuating themselves into the institutional consciousness at various research entities.

As the authors write in their discussion:

[the tendency of press releases and subsequent reporting to emphasize the benefits of experimental treatments] is probably related to the presence of “spin” in the conclusions of the scientific article’s abstract. . . . [and] may be responsible for an important gap between the public perception of the beneficial effect and the real effect of the treatment studied.

To me, the authors of the current study could have focused more on the problem residing almost exclusively in specialist journals rather than general medical journals. I know from experience that the editorial review and peer review at the large general medical journals are much more intense and rigorous than at smaller journals, with language in abstracts routinely excised as repeated statistical and methodological reviews peel away the gloss of the submitted paper.

Boosterism outside of medical research isn’t unheard of: cold fusion, alien bacteria, faster-than-light particles, and so forth. But in many of these cases, the spotlight thrown on big claims helps to quickly show them to be false or at least limited. With medical research, the problem is more intractable because overblown findings can prompt people to take drugs or seek other treatments for no good reason, embracing approaches that might do more harm than good. In addition, repeating or reproducing studies can take years, or be simply impossible.

The locus of trouble this study found in specialist journals (which are usually not as rigorous as the major medical journals, simply from a resourcing standpoint) suggests we need to do more at the periphery: validating results, toning down language and claims, and ensuring there wasn’t any monkey business. Whether and how these factors square with initiatives for faster publication, “publish, then filter” approaches, and mega-journals should be a point of discussion and concern for everyone in science, and not something we assume will take care of itself through some sort of market magic.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

5 Thoughts on "Are Scientists Themselves to Blame for Exaggerated Claims in Science Journalism?"

I forwarded this to the head of the National Association of Science Writers. Really intriguing.

The PLoS article focuses on reporting of randomized controlled clinical trials, but the spin here is relatively staid compared to the hype that comes out about preclinical studies. I wish I had a nickel for every press release from a university about an obscure engineering professor who just revolutionized cancer care with their new device. Many of these press releases are not even backed up by a peer-reviewed publication in any journal.

The assertion that the problem is with specialty journals does not necessarily hold true. Broader circulation journals are not likely to publish earlier phase studies, which are smaller, have broader confidence intervals, and leave more room for interpretation or extrapolation. The issue is just as likely to be the type of research being published as where it is being published.

One issue that you do not raise, but that I do in my story in the BMJ (http://www.bmj.com/content/345/bmj.e6106), is that the abstract often is not peer reviewed, even though it is the most important part of the paper. One relatively simple and easy way to reduce spin is for editors and reviewers to give abstracts the attention they merit.
