Ivan Oransky, MD, is a busy guy. A physician and professor of journalism at NYU, the executive editor of Reuters Health, and the treasurer of the Association of Health Care Journalists, Oransky also runs the blog Retraction Watch, which he founded two years ago today with Adam Marcus. Retraction Watch is a fascinating site that reports on retractions in the literature, with a special emphasis on those in biomedicine, and does not suffer fools gladly.
Ivan was nice enough to agree to an email interview. The results follow.
Q: Today’s the second birthday of Retraction Watch. How did it start? What led you and your colleagues to get it going?
A: At the risk of committing what might be considered duplication in scientific publishing circles, since I’ve told this story in places including The Node: By summer 2010, Adam Marcus and I had known each other for several years, freelancing for one another’s outlets and that sort of thing. We both had a healthy interest in retractions, mostly because as trade reporters we’d learned that there was almost always a good story behind the notice. Most journals didn’t seem particularly interested in providing details about those stories in retraction notices, nor in publicizing them – which of course only makes journalists even more curious.
So when I suggested in a phone call one July afternoon that I thought the kind of constructive fun I’d been having on Embargo Watch would probably translate into a blog about retractions, Adam said yes right away. We figured we’d tell the stories behind retractions, and how they were handled, to open a window on the scientific process, particularly how transparent and self-correcting it really was. We launched two years ago, and have been thrilled and gratified by the supportive community we’ve given a home.
Q: Scientific communication is accelerating, access to scientific information has increased, and the rate of retractions is accelerating. Do you think there’s a link?
A: It’s always risky to draw any firm conclusions from what may seem like a large number – 400 retractions in 2011, with just 2,200 in the entire decade before that, including 2011 – but which is actually infinitesimally small next to the more than 1 million papers published every year.
That being said, there are some signals that more eyeballs have meant more scrutiny, and more retractions. Evidence for that includes the fact that higher impact journals have more retractions, and that duplication retractions – aka, the imprecise term “self-plagiarism” – are on the rise. Plagiarism detection software, another kind of eyeball, has also played a role.
Others argue that the hypercompetitive funding environment has encouraged more researchers to cut corners. My gut tells me this is true, although it’s hard to prove.
Q: Retraction, correction, and other notices of potential problems with papers can be opaque. Why do you think this is so? Should it change?
A: Adam and I wish we understood completely why some journals run such unhelpful retraction notices. One journal that publishes one-liners such as “This article has been withdrawn by the authors” told us that the reasons for retractions are confidential – even as other journals publishing notices about the same group of authors detail what went wrong.
Unfortunately, we’re often left thinking that the old boys’ network nature of science makes editors reluctant to say too much, particularly in cases of misconduct. It’s hard work to get to the bottom of a story. Adam and I should know. We’re sometimes told that editors can’t be expected to do too much, since they’re often volunteers. Tough. If you can’t staff a publication properly, you shouldn’t expect people to trust it very much.
We think this opacity should change for many reasons. The most important is that other scientists should know how the retraction affects their work. Was every last data point flawed? Or just some major ones?
We also think accurate retraction notices can help science improve, by describing what’s really going on in some quarters, and perhaps even deterring future fraudsters. That’s why we can’t understand why, in an effort to not discourage retractions, one journal is happy to give authors license to say however much or however little they want – even if that means hiding misconduct.
Finally, it’s about transparency and correcting the scientific record. If journals want us to trust what’s in their pages, they should be transparent throughout. Opaque notices don’t fit that model.
Q: Are you worried that Retraction Watch will become background noise? What are you doing to make sure it’s not?
A: Given just how many retractions there are – many of which we don’t have time to cover – it’s certainly possible that we’ll fade into background noise. But we don’t see any evidence of that happening as we cover more and more cases. Quite the opposite; our engagement and traffic have grown steadily, and pickups by the media, along with interview requests, have become more and more frequent. There are now several blogs that might be loosely considered “pre-retraction” blogs, posting often anonymous critiques of papers, some of which end up being retracted – and on Retraction Watch.
That being said, we’re always looking for ways to be more useful and relevant. Last year, we introduced categorization, so that readers – including researchers who might make use of the data – could sort posts by journal, country, reason for retraction, and other criteria. Take a look at our right-most column for that dropdown menu.
This year, for our second anniversary, we’re introducing something we’re calling the Transparency Index, which we hope will be another way to judge journals alongside the impact factor and Fang and Casadevall’s Retraction Index. We describe this work in progress in an invited opinion in The Scientist. We’re also going to offer a membership to the site, so that our readers, in addition to the tremendous support they’ve already given us with criticisms, tips, and spreading the word, have the opportunity to support our efforts to create a robust and user-friendly database of retractions, corrections, and other updates to the literature.
Q: As a journalist, the embargo (or Ingelfinger Rule) seems to be a sticking point. Can you explain your perspective?
A: In the two-and-a-half years I’ve been running Embargo Watch, my thinking has evolved. I used to think of embargoes as the real problem, with the Ingelfinger Rule as a big-picture, nebulous but nefarious issue in the background. But what I realize now is that the Ingelfinger Rule – or, as I’ll explain, the specter of Ingelfinger – is the real problem.
The threat of Ingelfinger – as opposed to the reality, which, as many journals have explained, at least officially doesn’t prohibit pre-publication publicity as long as researchers don’t court journalists’ attention – makes many scientists think they have to choose between being open about their work, even to the taxpayers who’ve paid for it, and publishing a peer-reviewed paper in a journal so they can be promoted, or get a grant. It exerts its effects far upstream of a typical embargo, which lasts several days.
There’s a legitimate argument for how a several-day embargo helps reporters interested in giving readers context and clarity do a better job, one that for many people can be fairly balanced against withholding information from the public for some short period of time. Embargoes require careful management, and I’ve found plenty of examples of publishers and societies that have demonstrated they don’t know what they’re doing. But properly managed, it’s not unreasonable to think embargoes may do more good than harm.
Ingelfinger, on the other hand, is a half-step short of a gag order. When journals do everything they can to maintain their control over the flow of scientific information for their own benefit – with the willing participation of many journalists, I should add – they distort how science works. Journals and reporters overemphasize how final a finding is just because it appears in a published peer-reviewed paper, and that makes it even more difficult to admit that maybe the study had limitations, and to knock down hype. Alice Bell has written persuasively about moving science journalism upstream. I love the idea. But it will require breaking Ingelfinger completely. That’s something Vincent Kiernan, on whose embargo-critic shoulders I am privileged to stand, has argued for over the years.
To be fair, if editors and producers were willing to allow reporters to include more caveats, including whether something had been peer-reviewed, and what that meant, we’d have better coverage. If there isn’t room for those caveats – and I haven’t had any complaints from Reuters Health clients for lengthening our stories to include them, by the way – then perhaps it’s best to use peer review, despite all of its problems, as a threshold for what reporters should cover. But I think there’s room.
Q: What’s your vision for improving scientific communication?
A: At the risk of sounding like a broken record: More transparency. Scientists, and the people who promote their work, should be more willing to acknowledge that research is a human enterprise. (And in many cases, a publicly funded one.) That means scientists will make mistakes, and some will commit misconduct. We hear frequently from scientists that watchdog efforts like Retraction Watch are undermining trust in science. But what undermines trust in science – or any human enterprise – is when its practitioners say “nothing to see here.”
Q: What’s the biggest problem we’re facing today as communicators?
A: I may not be the best person to answer this, since I don’t consider myself a science communicator. I see my role as a journalist as a scientific watchdog whose work may describe how well the scientific process is working. As a byproduct, my work may communicate science in the way science communicators think of doing that.
I think this is an important distinction because I find that a lot of what passes for science communication is too boosterish, leaving out limitations, for example. In health journalism, a look at Gary Schwitzer’s HealthNewsReview.org shows that journalism outlets are passing along much of that boosterism without any reporting or filters. I’m a big fan of that site, and any attempt to rigorously critique – and thereby hopefully improve – science coverage. The ability of people to comment on stories and posts, and publish themselves online, has also improved coverage. For those journalists who are more interested in getting it right than in making themselves look omnipotent – cue comparisons to scientists here – such robust engagement has also made reporting better.