As a system for correcting science, article retractions represent a fast, democratic, and efficient mechanism for alerting scientists to invalid work, a new study of biomedical papers reveals.
In the article “Governing knowledge in the scientific community: Exploring the role of retractions in biomedicine” (Research Policy, March 2012), Jeffrey Furman, Kyle Jensen, and Fiona Murray report on a study of 677 article retractions identified in MEDLINE between 1972 and 2006. What distinguishes their study from nearly all prior attempts to understand the nature of retraction is their use of a control group — a comparison set against which to measure the performance of retracted papers. The control group was assembled by selecting nearest neighbors: papers published just before and after each retracted article.
Because no control group forms a perfect comparison to the study group, the researchers also employed statistical techniques to estimate, and control for, the effects of several variables under investigation, such as the geographic location of the corresponding author. Their results can be summarized in three main points:
- The retraction system is fast — While some false articles remain in publication for years before retraction, nearly 15% of all retractions took place during the year of publication and 50% within 24 months. There is also evidence that the delay between initial publication and retraction is getting shorter over time.
- Retractions are democratic — There does not appear to be bias due to geographical location or institutional affiliation. Outside of the United States, however, retraction numbers are relatively small, limiting the sensitivity of the analysis.
- The effect of issuing retractions on article citations is severe and long-lived — Compared to control articles, annual citations to retracted articles drop by 65% following retraction. Citations decline by 50% during the first year post-retraction and by 72% by year 10.
Compared to the control group, retracted papers were more likely to be highly cited in their first year. They were also more likely to be authored by researchers at top US universities. Furman and his co-authors consider several possible explanations: research produced by scientists at top universities tends to receive both a higher number of early citations and more intense scrutiny; the pressure to retract false papers is much greater at prestigious US universities; and lastly, articles that are later retracted may simply attract considerable debate about their veracity soon after they are published. Whichever precursors lead to article retraction, the effects on the citation record are clear:
[W]hen false knowledge is identified and signaled to the community via a retraction, the signal is critical and leads to an immediate and long-lived decline in citations … [This main finding] provides compelling evidence that the system of retractions is an important mode of governance, which alerts the scientific community to the presence of false knowledge, helping investigators of all varieties to avoid follow-on research predicated on false science, potentially saving tens of millions of dollars per year.
Before concluding that the present system for issuing and communicating retractions is working sufficiently well for science, industry, and the public, we should acknowledge its great failings: many publishers are reluctant to issue retractions without the author’s permission, lack the resources to properly investigate or defend allegations of misconduct, routinely issue ambiguous retraction notices (or none at all), and adhere poorly to established ethical guidelines. Improving any of these areas, even just a little, would greatly strengthen the self-correcting nature of science.