The misbehavior of authors — one of the most intractable problems in scientific and scholarly publishing — reared its ugly head again last week, as SAGE revealed that it was retracting 60 papers after it detected a possible peer-review and citation ring built by leveraging false identities inserted into a reviewer database.
This is not the biggest retraction case in history. That record of 183 retractions belongs to Yoshitaka Fujii. The main tool used by these fraudsters — identity fraud — is also not new, having been used by Hyung-in Moon on his way to 60+ retractions.
The ring struck the aptly named Journal of Vibration and Control, which is devoted to studying perturbations and how to control them.
The statement by SAGE has a surreal aspect to it:
While investigating the JVC papers submitted and reviewed by Peter Chen, it was discovered that the author had created various aliases on SAGE Track, providing different email addresses to set up more than one account. Consequently, SAGE scrutinised further the co-authors of and reviewers selected for Peter Chen’s papers, these names appeared to form part of a peer review ring. The investigation also revealed that on at least one occasion, the author Peter Chen reviewed his own paper under one of the aliases he had created.
As many as 130 fake email accounts may have been involved, according to our friends at Retraction Watch, who have monitored the story closely.
This news started some emails flying around, with people wondering what could prevent things like this in the future. Could something that validates peer review, like the new PRE-val system my company is launching, have helped? Could ORCID, which is built to disambiguate authors in all their roles, including editing and reviewing? Could CrossCheck, which is intended to flag large passages of previously published text?
It turns out that none of these would have much effect on this particular situation. Falsifying an ORCID is probably about as easy as falsifying any other form of online identity, especially with so many unclaimed ORCIDs in existence; once all legitimate authors are using them and all relevant papers are claimed, the system would be nearly bulletproof, but we are not there yet. Pretending to be an obscure author or three and assigning illicitly obtained ORCIDs to fake email addresses doesn’t seem all that difficult. Judging from the lengthy dispute procedures outlined on the ORCID site, it would take a while to clear up a discrepancy. I asked Howard Ratner about this, and he confirmed that until ORCID is more widely adopted, it doesn’t have enough data to provide a trusted barrier against exploitation by a devoted fraudster.
PRE-val would have scored these articles by trusting that the peer-review information was accurate and honestly derived. CrossCheck checks for plagiarism, not identity fraud.
The fact is that we trust our authors and reviewers, and there is no system currently available to stop things like this from occurring, beyond the risk of being caught, publicly humiliated, banished from academia, and so forth.
How high is the risk of ostracism even if an academic or researcher is caught cheating? Not that high, according to a study published last year, where the authors found that:
[w]rongdoing in research is relatively common with nearly all research-intensive institutions confronting cases over the past 2 years. Only 13% of respondents indicated that a case involved termination, despite the fact that more than 50% of the cases reported by RIOs [research integrity officers] involved FFP [fabrication, falsification, plagiarism]. This means that most investigators who engage in wrongdoing, even serious wrongdoing, continue to conduct research at their institutions.
Another problem with enforcement is the inability to compel institutions where fraud has occurred to return grants and other funding secured in the process. This is all nicely spelled out in a New York Times op-ed by Adam Marcus and Ivan Oransky.
In the SAGE incident, the individual at the center of the scheme has resigned from his post, and the editor of the Journal of Vibration and Control has resigned as well. That’s somewhat reassuring.
SAGE is to be commended for investigating this and pursuing the trail vigorously. Of course, prevention would have been better, and there will certainly be some damage to the journal and publisher brands.
The difficulty of detecting situations like this leads one to wonder whether this is the tip of an iceberg. After all, this is not completely unfamiliar author behavior, as noted above. In other instances of academic misbehavior, authors have added senior authors to papers without their permission and seen those papers published, editors have self-edited papers to publication in their own journals, and authors have published plagiarized papers.
It’s easy to blame authors, editors, and publishers for this, but as is so often the case, the incentives (their type, strength, and positioning) are also an issue. Yet, whenever one of these situations arises, we rarely question the system itself, one that rewards publication to such a degree that people we’d expect to be able to trust, given their educational attainments and professional affiliations, still feel the need to cheat, commit fraud, and exploit the trust economy of scholarly publishing.
While retractions account for only about 0.04% of papers published each year, the public perception problems and damage to the record are disproportionate, real, and regrettable. If we consider retractions in our industry to be comparable to the worst kinds of mistakes in other industries, we see that 0.04% is a high rate. For instance, if the airline industry had a 0.04% error rate, the United States alone would experience more than 100 plane crashes per month.
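The airline comparison is easy to check with back-of-the-envelope arithmetic. The flight volume below is an assumption for illustration (roughly 28,000 scheduled commercial flights per day in the U.S., an approximate round figure), not a number from the original piece:

```python
# Back-of-the-envelope: what a 0.04% failure rate would mean for U.S. aviation.
# The daily flight count is an assumed round figure, not a sourced statistic.

error_rate = 0.0004            # 0.04%, the share of papers retracted each year
flights_per_day = 28_000       # assumed: approximate U.S. daily commercial flights
flights_per_month = flights_per_day * 30

crashes_per_month = flights_per_month * error_rate
print(round(crashes_per_month))  # → 336, comfortably "more than 100" per month
```

Even if the assumed flight count is off by a factor of two in either direction, the result stays well above 100 crashes per month, which is the point of the comparison.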
Despite initiatives that place other goals first, I believe our main challenge is how to maintain quality, trust, and integrity within the scientific record. Are we clever enough to answer this challenge?