Article retractions, especially those reported in top journals, make headline news, and it's easy to understand why:
- They feed our insatiable desire for drama;
- They undermine faith in a process that is supposed to be immune to partisan politics and industry influence;
- They provide fodder for outrage over the misuse of public funds and poor government oversight; and
- They leave us feeling robbed of the social good that should come from advances in medical and scientific knowledge.
Reports that journal retractions are up dramatically lead us to ponder whether our collective trust in the institution of science should be called into question.
On the other hand, a dramatic increase in retractions may simply indicate that we are getting better at detecting scientific error and more determined to report it. Lacking the drama of the corruption narrative, the diligence narrative gets much less attention in the media.
Two papers published this year in the Journal of Medical Ethics provide data to support both sides of the story.
The article, "Why and how do journals retract articles? An analysis of Medline retractions 1988-2008," by Liz Wager and Peter Williams, reports on an analysis of retraction statements over time.
They report that retractions have increased tenfold, from a tiny percentage of total publications (0.002%) in the early 1980s to a still tiny, but larger, percentage (0.02%) by the late 2000s; 0.02% amounts to roughly one retraction for every 5,000 published articles. In real terms, the actual numbers are quite small compared to the volume of published literature. This is not to downplay the severity of a retraction, only to note that retractions are rare events.
Retracting an article is serious business, as Wager and Williams write:
Retraction is one of the most serious sanctions journals can take against authors in cases of misconduct, and can cause permanent damage to reputations and academic careers. Therefore, retractions should be handled carefully and journals should have processes for deciding when and how to retract articles.
Given the fear of litigation, it is not surprising that editors are generally hesitant to retract an article without the author's permission. In 2009, the British publisher Emerald reversed its decision to retract an article, a decision that had been based on evidence of substantial plagiarism. Threatened legal action was reportedly the reason for the reversal.
Even barring litigation, investigating and retracting an article takes substantial time and resources, which may tempt many editors and publishers to simply ignore the problem. On one occasion, I found myself stonewalled when I presented a clear case of duplicate publication to journal editors. On another, the editor simply deleted the article from the publisher's database, and no public notice was issued.
Despite clear retraction guidelines developed by the Committee on Publication Ethics (COPE), journals do not follow uniform practices, report Wager and Williams. Some retraction statements fail to distinguish honest error from misconduct (the latter being far more egregious), or provide no reason at all. They write:
Some retraction statements appeared to use deliberately ambiguous wording which made it difficult to distinguish honest errors from suspected (or proven) misconduct.
The article, "Retractions in the scientific literature: is the incidence of research fraud increasing?" by R. Grant Steen, provides a similar analysis and classification of retraction statements to that of Wager and Williams. Steen also reports on how publishers alert readers that an article has been retracted, and the results are similarly inconsistent:
Summary of how the naïve reader is alerted to a paper's retraction (from Table 2; the categories are not mutually exclusive, which is why the percentages sum to more than 100%):
- Watermark on PDF (41.1%)
- Journal website (33.4%)
- Not noted anywhere (31.8%)
- Note appended to PDF (17.3%)
- PDF deleted from website (13.2%)
It's hard to get a sense from either of these papers whether the increasing rate of retractions signals that the system is getting worse (the corruption viewpoint) or better (the diligence viewpoint). There are arguments to support each side.
Increasing pressure on faculty to publish, and monetary incentives for publishing in high-impact journals (as recently reported at some Chinese universities), may increase the number of cases of misconduct. On the other hand, we now have powerful tools that make some forms of misconduct, like plagiarism and image manipulation, easier to detect. When these tools are used before publication (as in the case of CrossCheck), they can stop problematic articles before they reach the published literature.
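To make the detection idea concrete, here is a minimal sketch of the kind of n-gram text matching such tools rely on. This is purely illustrative, not CrossCheck's actual algorithm; the 5-word shingle size, the sample texts, and the notion of an editorial threshold are assumptions made for the example.

```python
# Minimal sketch of n-gram text matching (illustrative only; not
# CrossCheck's actual algorithm). Each document is reduced to the set
# of 5-word "shingles" it contains, and similarity is the Jaccard
# overlap of those sets.

def shingles(text, n=5):
    """Return the set of n-word shingles in a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def jaccard_similarity(doc_a, doc_b, n=5):
    """Overlap of shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    return len(a & b) / len(a | b) if (a and b) else 0.0

submitted = ("the rate of article retraction has increased "
             "tenfold over the past two decades")
published = ("we find that the rate of article retraction has "
             "increased tenfold over the past two decades")

# A submission whose overlap with already-published text exceeds some
# editorial threshold would be flagged for scrutiny before acceptance.
print(f"similarity: {jaccard_similarity(submitted, published):.2f}")  # 0.75
```

In production systems the matching is run against large full-text indexes rather than a single pair of documents, but the principle is the same.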
Given the great rewards that come from scientific publishing and the significant costs and risks that come from article retraction, my own sense is that only a tiny slice of seriously flawed papers is ever made public, and that increased pressure to adhere to ethical standards in publishing is simply making this tiny slice appear a little larger.
Discussion
13 Thoughts on "Retract This Paper! Trends in Retractions Don't Reveal Clear Causes for Retractions"
The Web certainly makes it easier to come across instances of duplication and plagiarism. I routinely use Google Scholar's related-articles feature, which uses term vector similarity across multiple journals, effectively making the whole document the search term. Identical or even rephrased language rises to the top of the list. And email makes it easy to report what one finds.
This suggests that the increase may be largely instrumental in origin, a common problem with enforcement statistics. But we really need better data on the reasons for retraction.
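(For the curious, here is a minimal sketch of what "term vector similarity" means in practice: each document becomes a vector of term counts, and candidates are ranked by the cosine of the angle between vectors, with the whole source document serving as the query. This toy example is not Google Scholar's actual implementation; the documents and scores are invented for illustration.)

```python
# Toy illustration of term-vector cosine similarity (not Google
# Scholar's actual implementation).
import math
from collections import Counter

def term_vector(text):
    """Bag-of-words term counts for a document."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two term vectors (1.0 = identical terms)."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Use the whole source document as the "query" and rank candidates;
# duplicated or lightly rephrased text rises to the top of the list.
query = term_vector("journal retractions are rare but increasingly reported events")
candidates = {
    "rephrased": "retractions in journals are rare events but are increasingly reported",
    "unrelated": "protein folding dynamics simulated on commodity hardware",
}
for name, text in candidates.items():
    print(name, round(cosine_similarity(query, term_vector(text)), 2))
```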
I'd wager that it is far easier to document and prove copyright infringement (the exact duplication of material without permission) than plagiarism (the uncredited purloining of others' ideas).
David is quite correct that the Web makes it a lot easier to detect instances of duplication and plagiarism. However, it could be argued that this should make article retractions less common, rather than more common, since publishers can employ the same detection tools at the peer-review stage, before an article is ever published.
At the recent ACRL conference, three librarians from the University of Missouri gave a paper on retracted publications in the biomedical literature. One startling discovery was that while the publisher may indicate retraction on its website, the article may still appear in its original form in an aggregator database, with no indication of retraction.
Mark Hester notes that plagiarism-detection software should make it possible to detect plagiarized material at submission, prior to publication. However, not all journals adopted this software when it first became available, so there may, even now, be easy journal targets for the plagiarist.
I think some editors are using plagiarism-detection software to examine papers that have already been published. This may explain why the lag between publication and retraction has been increasing lately: in 2000, only 4 papers were retracted and the longest time to retraction was 8 months; in 2009, 184 papers were retracted and the longest time to retraction was 117 months.
Many of these comments give the impression that a primary reason for retraction is plagiarism, which can be addressed through the new tools for detecting it. But more serious issues relate to matters such as the misuse of data, including falsification, and the unethical manipulation of images. Since these problems usually mean that the conclusions are not supported, they must be brought to the attention of the reader. In a litigious society, doing so may prove difficult.
In addition to the possibility of litigation and the drag on staff time and resources, in some cases there is the fear that retraction will reflect badly on the journal (editors and reviewers “taken in”). Given these pressures on decision making about retraction, it occurs to me that plagiarism, infringement, and misuse of data are seen, and treated, as problems of the affected journal rather than of the entire research community. I don’t have a solution in mind, but I’d say this is a tension worth examining.
Judy Holoviak is right in thinking that text plagiarism is not the primary reason for paper retraction; in a sample of 788 retracted papers, only about 14.4% were retracted for text plagiarism, while 28.2% were retracted for data fabrication or falsification.
I think that multiple retractions from the same journal do suggest lax editorial policies. As an example, Anesthesia & Analgesia has published at least 33 papers that were retracted in the last ~3 years (22 this year!).
To be fair to A&A, those 22 retractions are all the work of one author who failed to get proper IRB approval for his studies:
http://www.anesthesia-analgesia.org/content/112/5/1246.full
How does that change things?
Not only did Boldt fail to get IRB approval for 89 published studies, but he probably didn’t actually do many of them. The first paper retracted claimed to use a blood volume expander that had not been ordered by the hospital for about 10 years prior to study publication.
Not looking to give Boldt a pass, but thinking more about whether this is an example of widespread lax editorial policies or the result of one unscrupulous but convincing author.
If the journal had retracted 22 papers from 22 different authors this year alone, that might give me more worries about their editorial practices than finding one instance of a fraudulent author and retracting all his past publications.
Prior to this year, Anesth. Analg. had published 11 RCTs that were later retracted, which is about 15% of all RCTs that have ever been retracted. Dr. Scott Reuben accounted for 8 of those 11 retracted RCTs and Boldt accounted for just 1 (prior to 2011), which means that 2 other first authors were also involved.
That sounds lax to me…