At the recent STM Innovations meeting, a number of new initiatives were discussed along a similar theme — promoting the works of authors, essentially marketing their papers to drive citations, public awareness, and chances for academic recognition. While one aspect of these networks is a basic narcissism (my profile with my picture about my papers and my data promoting my career), another aspect is that in an increasingly crowded publishing landscape suffering from filter failure, promotional efforts have stepped in to create awareness and differentiate one study from the others.
There may be a downside to all this promotional activity. Recall the 2010 NASA press release announcing, in heated tones, the potential discovery of arsenic-based life on Earth. Not only did the underlying research prove to have problems and at best suggest a modest finding, but the wild promotion of it brought reactive shame on NASA, the journal in which it was published, and the researchers themselves. Without such heady promotion, it’s conceivable that the paper’s faults would have been revealed in a quieter tone and without public disgrace, while the core findings may have contributed in an incremental fashion to our understanding of the world. As it is, the incident lives in infamy.
A recent study in the BMJ suggests that the downsides of PR and promotional activities around scientific reports may be more pervasive and consistent than the occasional black eye. The authors studied 462 academic press releases from 20 leading UK universities and compared these to 668 news stories based on those press releases and their related studies. They found that 40% of the press releases contained exaggerated advice, 33% contained exaggerated causal claims, and 36% contained exaggerated inferences to humans from animal research. Most of these exaggerations found their way into news stories, with 81% of the exaggerated causal claims making the news and 86% of the exaggerated inferences to humans carrying over. When press releases were not exaggerated, only 10-18% of news stories contained exaggerated claims.
The authors took considerable time evaluating each instance, with each press-release-to-paper-to-news-story set requiring 3-4 hours to code for their dataset. The authors also note that they did not include journal press releases, which on occasion are issued separately from academic press releases. Nor did they study press conferences or the increasing trend of splashy meeting announcements linked to synchronized online publishing by major journals.
This study was carried out among UK universities and press outlets. It would be interesting to compare that to US counterparts, because my impression is that US academic institutions and press outlets have been at this longer. Whether this would lead to lower rates or higher rates is an open question. My hunch is that US health reporters are even more dependent on press releases than are their UK counterparts.
One interesting aspect of this phenomenon is the bias inherent in it. While we’re accustomed to seeing conflicts of interest and bias among authors and editors, this dimension shines a light on the inherent bias academic institutions have around the research published by their faculty, affiliated researchers, and grant recipients. The increasing reliance on soft money must surely make the urgency around promotional activities all the greater. These aren’t just authors competing for grants; these are now large, well-funded institutions competing for grants through their authors and the research they produce. It’s not surprising that employees would feel pressure to goose findings in the media under such conditions.
An accompanying editorial in the BMJ does a nice job of laying out the scope of the issue beyond this study, while proposing some accountability options:
Accountability is straightforward: all academic press releases should have named authors, including both the press officers involved and the individual named academics from the original academic paper. This would create professional reputational consequences for misrepresenting scientific findings in a press release, which would parallel the risks around misrepresenting science in an academic paper.
Promotion and marketing of published papers is becoming an even more important aspect of academic life as the sheer volume of papers increases and attention spans shrink. Services like Kudos seek to help authors market their published papers more effectively, while much of the altmetrics movement has been about measuring social media marketing and uptake. The utilization of pre-print servers is another form of marketing papers, putting them out in a form that creates early awareness and stakes a claim.
The pressure to publish already leads to untoward behaviors like plagiarism, authorship fraud, and research fraud. And, because positive results are easier to get published, there is a long record showing how authors exaggerate claims and skew conclusions to increase their chances of being published in high-impact journals. Now we are seeing the results of an increasing pressure to promote published works, which seems to be leading academic institutions to join the ranks of self-promotion through exaggeration.
Educating the press officers of various academic institutions about the risks emanating from these practices is one approach to fending it off. Journalists should not be let off the hook, however, as they appear to accept assertions and claims in press releases without actually reading the underlying studies. Fundamentally, weaknesses in journalism and the news business model must be contributing to the filter failure between university press offices and news outlets.
The biggest concern about how the scientific record can be warped and public perceptions distorted is how this exploits and exhausts an underlying faith in the scientific endeavor. We may have crossed an inflection point in this regard. The public already scoffs at most nutritional advice because so many claims have been exaggerated or contradictory. From vaccination to global warming to economics, getting beyond “the controversy” remains a problem, and reliance upon and manipulation of PR efforts is a tool that can stymie intellectual engagement. So when we see science news distorted for self-serving purposes by major academic institutions, we are seeing yet again — just as in the misuse of impact factors — how academia itself has become part of the problem. And that gives us all a black eye.
6 Thoughts on "Exaggerated Claims — Has 'Publish or Perish' Become 'Publicize or Perish'?"
These are examples of what I call Funding Induced Biases, or FIBs. I am presently developing a taxonomy of FIBs, and there are quite a few other types as well. The case of press releases is a fairly robust research area. A Google Scholar all-text search on “exaggeration” and “press releases” for just the last five years gives over 17,000 hits. A significant fraction of the hits are actually studies of press releases and exaggeration, including in science. Perhaps promoting the research on hype would diminish the practice.
“Recall the 2010 NASA press release announcing, in heated tones, the potential discovery of arsenic-based life on Earth.”
Weren’t the results of this study also published in Science about 6 months later?
I think press releases should be taken for what they are: public relations. The ones I have seen at my university are written by public relations people whose job it is to promote the university.
I think an even bigger problem is when a seriously flawed study slips through the peer review system of a very prestigious journal, perhaps to some extent for similar reasons. The potential benefits of publishing such a dramatic finding, in exposure and increased impact, are hard to pass up, and they make it easier for editors and reviewers to overlook the warning signs of a potentially flawed study.
Fair points, but perhaps missing the point. While flawed studies do at times find their way into prestigious journals, this is a relatively rare event, and one that is corrected fairly quickly due to the prominence of the venue. Indeed, it took only hours for the arsenic paper to be called into question.
More pervasive is the distortion of research findings in press releases, with these distortions ending up in news outlets a vast majority of the time, according to the study discussed here and plenty of other evidence, both published (see references in the paper and its accompanying editorial) and anecdotal.
It’s fine to say that “press releases should be taken for what they are, public relations,” but far too often press releases are turned into news accounts, where they are taken as something far closer to the truth by lay readers, and even by scientists. This transformation of promotion into authority via news publication introduces issues that aren’t easily addressed, and which are more corrosive, I believe, than the occasional misfire at a major journal.
Exaggeration, the topic of the research article that is the subject of this post, and deeply flawed studies are rather different things. There is in fact something called “publication bias,” carried out by journals and peer review, which is a subtle form of hype. It too is a robust research area and is in my FIBs taxonomy. Google Scholar returns over 25,000 hits on the term for the last five years.
This seems entirely in keeping with an ethos emphasising hyperbolic claims for the prestige indicators accruing to the work of each inmate. Publish or perish is still the axiom to watch. Derek Sayer is on the money.