An IKEA Store along Alexandra Road in Queenstown, Singapore. (Photo credit: Wikipedia)

Recently, there have been a number of alarmist articles taking swipes at journals and claiming science no longer works as it should. Peer review has been called a “toothless watchdog.” A recent article started out with the three-word startler: “Science is broken.” The so-called reproducibility crisis continues to dazzle with its hall of mirrors of titillation and scaremongering. The journals publishing economy has been dragged through the mud for a variety of presumed sins. In a very strange article, Sci-Hub was defended by the CEO of Creative Commons, while the previous week a fine medical journal was viewed askance.

It’s almost as if there’s nothing but upside for people hating on science and journals these days. Science-bashing and journal-trashing have transmogrified into sport.

What is at work here?

The topic is attractive, to begin with. There’s a lot of interest in science, as science directly affects our lives in many ways. But science can be idealized, which makes it ripe for stories about feet of clay. Reacting to these “how the mighty have fallen” stories is difficult for those of us closer to reality and familiar with seeing great intellects’ human side. If empiricism and reality-based thinking were broken or failing on a major scale, or were in any way broadly compromised by misbehavior in the ranks, we’d share a real concern (and, in many ways, we do — about misaligned incentives, over-emphasis of certain metrics, and so forth). But generally, science seems to be working pretty well these days, with the Zika virus very quickly understood at a basic level, with the confirmatory discovery of gravitational waves, and with a Canadian Prime Minister who can explain quantum computing. Even though quirky and unpredictable humans bring it to life, it seems like science is quite functional. Like any living system, it and its forms of accountability continue to evolve.

Don’t get me wrong — science and academic publishing face real challenges. But these real challenges are often obscured by the hyperbolic misrepresentations coming from some motivated critics.

Other stories — how academic publishing works, the scandals that occasionally emerge from journals — provide great human interest fodder. The academic publishing economy is so different from the trade publishing economy that it’s easily portrayed by contrast as unfair or unjust in some way. (If you ever want an interesting thought experiment, try flipping the script on this, and see which one you think is actually better — and you might, as you do this, consider whether the trade world is heading our way on its own.) Scandals in publishing and academia are uncommon bordering on the rare, but they can be juicy, and certainly show how human failings can undermine the scientific endeavor — whether it’s research fraud, sexist scientists, or ham-fisted academic leaders.

We participate in this, tacitly if not actively. Publishers, editors, and scientists aren’t generally cynical, but sincere. Any such concerns strike a chord because, if broadly true, we’d all be really worried and upset. Failure to make progress, ineffective screening systems, and exploitative pricing don’t motivate the people in this industry generally. From scientists to academics to editors to publishers, most of us devote our careers to these important aspects of furthering human knowledge, complete with long hours and personal sacrifices. We take criticism seriously, and respond slowly and thoughtfully. We spend months, if not years, discussing any critiques lobbed into our world, to make sure we aren’t missing something or making mistakes. We’re currently in the midst of “Reproducibility — Year 2: The Letdown.” We’re up to Season 18 of the Open Access Saga, with no end in sight. We take all these things seriously and at an academic stride, and that is sometimes to our detriment. By the time we figure out the actual scope of concern and respond, the faster-paced worlds of digital journalism, social media outrage, and public opinion have moved on and given us something else to worry about. Or they’ve latched onto the initial inciting events with a vise-like grip and won’t let go, even in the face of reason and evidence. Remember how long the “vaccines cause autism” garbage took to go away? Oh, wait. It hasn’t. (That’s 20 years and counting.)

There is a price to pay for all this noise obscuring some of the real concerns facing science and academic journals and libraries and researchers generally. There are also consequences to these salacious articles coming out in such volume and frequency.

Retraction Watch is perhaps the most accomplished and sophisticated outlet in this cottage industry of journal gazing. It certainly has its admirers, myself among them. So this is a friendly but skeptical assessment of its contributions. The writers generally provide good journalism and an interesting stream of content — but not always. The article mentioned at the outset portraying peer review as a “toothless watchdog” was written by the founders of Retraction Watch. It was a really good bad example.

While ostensibly positioned as a watchdog of the watchdogs, Retraction Watch gets most of its stories from what journals are already dealing with. In other words, it’s usually the second or third dog to bark. And then it judges how well the first dog did its job.

Retraction Watch can go too far at times in the name of transparency, and there may be a price to pay for a TMZ of retractions. Earlier this week, for example, a correction at NEJM was explained in excruciating detail. One author on the paper was found to have had other correction notices in other journals, and there’s a little bit of a salacious trail laid suggesting that maybe, just maybe, this researcher is a bad egg. But it was just innuendo — a little suggestive, a little tawdry, a wink, a nudge. No evidence is presented or quarter given. A small set of corrections is grounds for free-floating suspicion at Retraction Watch. Perhaps, instead, it’s the sign of a researcher who’s diligent enough to revisit the data and manuscript again and again, and contact editors requesting correction even if it means egg on their face. Perhaps, instead, it’s editors receiving feedback and doing double-duty on a paper to correct the record. Perhaps, instead, it’s the community attending to maintaining the integrity of the scientific record. But none of those is nearly as exciting or salacious as suggesting something more devious is at work. Good intentions are dull.

While it’s interesting to see a few retraction tales pulled together, it’s not clear that Retraction Watch is actually adding much to the functions around retractions and corrections. Editorial offices tend to communicate about these things, and most of what Retraction Watch finds is exactly this — a community that is managing itself pretty well. They are mainly reporting on normal journal functions, giving these a journalistic flair. Is this additive? Or does it subtract? Do corrections and retractions, as Fiona Godlee, editor of the BMJ, states in a recent article, represent “a positive sign”? Does Retraction Watch capture this spirit?

There is a potential downside to the daily litany from Retraction Watch. We want journals to issue corrections, and to retract articles when necessary. But I’ve seen editors second-guess the wisdom of issuing corrections or retractions — because of the unwanted media attention, because aspersions will be cast, because researchers will be subject to lurid speculation. When this is happening, it’s clear that a tool like Retraction Watch, which is positioned for transparency, is having a chilling effect. And how do you correct that?

But retractions aren’t the only topic being warped. Take a harder look at the “reproducibility crisis” than most popular media critics ever will, and you see a lot of complexity. Even the concept of “reproducibility” is tough — “replication” is different, some studies are so large and complex that they can never be replicated (while the biological mechanism they elucidate, for example, might be reproduced in another setting), and the more complex the study group, the harder both replication and reproducibility become. But before we accept the framing on offer, we can ask whether the analyses of the “crisis” were themselves strong enough to merit the label. Can those studies claiming irreproducibility themselves be reproduced? Is it actually possible to document all of the variables in an experiment? Did anyone catch that the most reproducible studies were the best to begin with?

There are also the broader and inherent difficulties when it comes to humans writing and following instructions. As a former technical writer for software, I’ve lived this. There is almost no instruction that can’t be misinterpreted or thought by some other person to be incomplete or confusing. Even highly trained and supervised surgeons can remove the wrong body part. A great essay about reproducibility illustrates how even building something as well-established and clearly feasible as a clock can lead to problems.

Does Ikea have a reproducibility problem if my Poang chair doesn’t look like the picture?

What do all these complexities and caveats tell us? Not the kinds of things that might generate thousands of clicks on a headline. Simple messages travel better in social media channels, and with traffic a major incentive for digital outlets surviving on advertising dollars, it’s no surprise nuance is often replaced by sensationalism.

What’s being fostered is an artifice with no answers — a dissatisfaction with science, with research reporting, and with independent arbiters of information. The risks of fomenting dissatisfactions like these without viable, positive, alternative solutions are significant. They can make the problems appear insoluble and the scientific community appear inept. Worse, they could keep our society from addressing some real problems that might actually hurt science and make research reports less reliable:

  • Public funding of science remains inadequate. In the US, only the Department of Energy (DOE) has had funding restored to 2012 levels. The Environmental Protection Agency (EPA) is being gutted, while we have lead in drinking water and continuing problems with lead paint abatement, not to mention growing problems in other areas of public health. We’re training more scientists, but not providing them with research funds, risking a “lost generation” of talented and well-trained young scientists who are leaving academic research for corporate research, where they have more funding and freedom, but society has less of a role in driving a common research agenda and funding discovery science.
  • Publishing serves a vital role by incentivizing information sharing between and from scientists, allowing scientists to focus on research rather than drab publishing tasks, maintaining a firewall between vested interests and scientific reports, and making discoveries public. At 0.5% of the overall research spending around the world, it provides these major services and benefits at very low costs. Imagine if findings about how the Zika virus or CRISPR works were unpublished and proprietary, not available to other qualified scientists for study, preserved by a multi-national drug company? Imagine if scientists still had to write their discoveries out in anagrams and codes to preserve claims of discovery? As David Wootton writes in “The Invention of Science”:

. . . there can be no science until there is a reliable way of publicizing knowledge.

  • Our ability to weather these distractions and attacks may be admirable, but at some point, we have to find our way to explaining what we do, why it matters, and all the things being done to make it better. A list may help, but it’s not enough. It may be that we’ve lost the thread on our own story, because we believe science is invulnerable, immutable. The loudest critics are like ungrateful passengers on a plane — no longer amazed by the miracle of heavier-than-air flight, irritated beyond reason that the armrest isn’t wider or that their overhead bin doesn’t have room for their puffy coat. Like airlines, science works so well that it risks neglect and attracts lazy derision, despite having a success rate higher than nearly anything else we’ve attempted as a species.

To shift analogies, successful science is like successful preventative medicine — it has no face, no “poster child,” no memorable image. It’s a harder story to tell, but a very common one. Most people live a long time. Most published scientific studies are done well enough to move things forward. Most papers require no correction or retraction. Most published experiments that are interesting enough to warrant attention can be replicated or reproduced in some fashion, even if there are differences in equipment or materials. We are in an amazing era of astrophysical, biological, and ecological discoveries. We are seeing more transparency. We take it all for granted.

The crisis facing science and journals is not what we’re being told. It’s somewhat worse. It’s a growing lack of respect for the basic necessity to pay for anything science does — less funding of research, less funding of libraries, less funding of journals, less funding of trainees. Where are these stories? Where are the voices compelling university presidents to fund libraries at a share of university budget commensurate with 1992? Where are the voices demanding that Congress restore funding for research and development to 2006 levels? Where are the voices advocating for a more robust editorial and review ecosystem so that fewer errors occur and more trained eyes see each paper both before and after publication so that we process the best science forward? Where is the transparency system that applauds researchers for correcting errors and retracting problem papers?

The absence of these stories may be cause for legitimate worry.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on "Sensationalism or Legitimate Worries? Examining the Cottage Industry of Journal Criticism and Science Alarmism"

Is it just me, or is there a middle way between the Smiths and the Andersons?

I’m suddenly reminded of what John Adams once said about another Founding Father, Thomas Paine: “he was a better hand at pulling down than building up.” Investigations into the failures of scholarly communication are necessary and important, but only if accompanied with suggestions for improvement. By now we’re all aware of the problems. But what of the solutions?

This does not seem the right way to think about the issue, in my opinion.

We (journals, the media, the public, and researchers) need instead to internalize that:
1) A paper is complete when published; the underlying research, however, is NEVER complete.
2) Reproducibility is important, since taxpayers pay for science, but you get what you pay for: if the funding agency paid for studies involving 50 replicates while the scientist asked for 500 in the grant application, there’s clearly a mismatch.
3) Researchers live in a house of glass; the sooner they grasp this, the better. Getting testy because someone else found a flaw in your paper is a bad idea for a scientist.

“Retraction Watch can go too far at times in the name of transparency, and there may be a price to pay for a TMZ of retractions.” I love this quote. The hyperbolic headlines are intentionally incendiary and I never see criticism of this muckraking approach.