From an economics standpoint, self-citation is the easiest way to boost one’s citations. Every author knows this and cites their own articles, however peripheral their relationship to the topic at hand. Editors know this as well, and some have been caught coercing authors into citing the journal. Other editors have published editorial “reviews” of the articles appearing in their own journal, focusing entirely on papers published in the previous two years — the window from which the impact factor is generated.
There is a price to pay for this behavior, especially when it is done to excess. Thomson Reuters, publisher of the annual Journal Citation Reports (JCR), routinely puts journals in “time-out” when their self-citation rates are high enough to greatly shift their positional rank among related titles.
There is another citation gaming tactic that is much more pernicious and difficult to detect. It is the citation cartel.
In a 1999 essay published in Science titled “Scientific Communication — A Vanity Fair?” Georg Franck warned of the possibility of citation cartels — groups of editors and journals working together for mutual benefit. To date, this behavior has not been widely documented; however, when you first see it, it is astonishing.
Cell Transplantation is a medical journal published by the Cognizant Communication Corporation of Putnam Valley, New York. In recent years, its impact factor has been growing rapidly. In 2006, it was 3.482. In 2010, it had almost doubled to 6.204.
When you look at which journals cite Cell Transplantation, two journals stand out noticeably: the Medical Science Monitor, and The Scientific World Journal. According to the JCR, neither of these journals cited Cell Transplantation until 2010.
Then, in 2010, a review article was published in the Medical Science Monitor containing 490 references, 445 of which pointed to papers published in Cell Transplantation. All 445 citations were to papers published in 2008 or 2009 — the citation window from which the journal’s 2010 impact factor was derived. Of the remaining 45 references, 44 cited the Medical Science Monitor, again pointing to papers published in 2008 and 2009.
Three of the four authors of this paper sit on the editorial board of Cell Transplantation. Two are associate editors, one is the founding editor. The fourth is the CEO of a medical communications company.
In the same year, 2010, two of these editors also published a review article in The Scientific World Journal citing 124 papers, 96 of which were published in Cell Transplantation in 2008 and 2009. Of the 28 remaining citations, 26 were to papers published in The Scientific World Journal in 2008 and 2009. We are beginning to see a pattern. This is what it looks like:
The two review articles described above contributed a total of 541 citations toward the calculation of Cell Transplantation’s 2010 impact factor. Remove them and the journal’s impact factor drops from 6.204 to 4.082.
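The arithmetic behind that drop can be sketched in a few lines. Note that the post does not report the number of citable items in the denominator; the figure below is back-solved from the two impact factors, so treat it as an inference rather than a reported number.

```python
# Back-of-the-envelope sketch of the two-year impact factor calculation.
# The citable-items count is NOT from the original post; it is inferred
# from the reported impact factors (6.204 with the reviews, 4.082 without).

def impact_factor(citations: int, citable_items: int) -> float:
    """2010 IF = citations received in 2010 to items published in
    2008-2009, divided by the number of citable items from 2008-2009."""
    return citations / citable_items

cartel_citations = 445 + 96        # from the two 2010 review articles
if_with, if_without = 6.204, 4.082

# Removing 541 citations lowers the IF by 2.122 points, so the
# denominator must be roughly 541 / 2.122, about 255 citable items.
citable_items = round(cartel_citations / (if_with - if_without))

total_citations = round(if_with * citable_items)
print(citable_items)
print(impact_factor(total_citations - cartel_citations, citable_items))
```

Run with the numbers above, the inferred denominator comes out near 255 citable items, and the adjusted impact factor lands back at roughly 4.082 — a consistency check on the reported figures.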
The editors of Cell Transplantation have continued this practice through 2011, with two additional reviews. The first appears in the Medical Science Monitor, with 87 citations to Cell Transplantation and 32 citations to the Medical Science Monitor, all to papers published in 2009 or 2010. The second appears in The Scientific World Journal, containing 109 citations to Cell Transplantation and 29 citations to The Scientific World Journal, all of which were published in the same two-year window from which the 2011 impact factor score will be derived.
In 2011, the editors of Cell Transplantation also published a similar review article in their own journal, citing a smaller sister journal, Technology and Innovation, 25 times — 24 of which were to papers published in 2010. The remaining references cite Cell Transplantation papers published in 2009 or 2010. The lead author of the article is the Editor-in-Chief of Technology and Innovation; the last author is co-editor of the journal.
From a strategic standpoint, placing self-referential papers in a cooperating journal is a cheap and effective strategy if the goal is to boost one’s impact factor. For an article processing fee of just $1,100 (Medical Science Monitor), the editors were able to return 445 impact factor-contributing citations to their journal. Best of all, this kind of behavior is difficult to track.
The JCR provides citing and cited-by matrices for all of the journals it indexes; however, these data exist only in aggregate and are not linked to specific articles. It was only upon seeing very large numbers amidst a long string of zeros — that, and a tip from a concerned scientist — that I was alerted to something odd going on. Identifying these papers required some fancy cited-reference searching in the Web of Science. The data are there, but they are far from transparent.
The ease with which members of an editorial board were able to use a cartel of journals to influence their journal’s impact factor concerns me greatly: the cost is very low, the rewards are astonishingly high, the practice is difficult to detect, and it can be facilitated very easily by overlapping editorial boards or by cooperative agreements between them. What’s more, editors can shield these “reviews” from peer review by labeling them “editorial material,” as some are. It’s the perfect strategy for gaming the system.
For all these reasons, I’m particularly concerned that of all the strategies unscrupulous editors employ to boost their journal rankings, the formation of citation cartels is the one that could do the most harm to the citation as a scientific indicator. Because of the way it operates, it has the potential to create a citation bubble very, very quickly. If you don’t object to how some editors are using citation cartels, you may change your mind in a year or two as your own title languishes behind those of your competitors.
While self-citation is very easy to detect, Thomson Reuters has no algorithm for detecting citation cartels, nor a stated policy to help keep this shady behavior at bay.
One way to detect an offending paper would be to look at the share of impact factor-contributing references directed toward a single journal. Computationally, this may be the easiest step. Determining how much influence is excessive and under what circumstances will ultimately be the biggest challenge.
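The screening step described above can be sketched with a simple concentration check. This is a minimal illustration, not an algorithm anyone has deployed: the reference-list format and the 20% threshold are assumptions chosen for the example, and picking a defensible cutoff is exactly the hard part the paragraph identifies.

```python
# Hypothetical screen for citation concentration: flag any journal that
# receives an outsized share of a paper's references falling inside the
# two-year impact factor window. The 0.20 threshold is an illustrative
# placeholder, not a recommended value.
from collections import Counter

def flag_concentration(refs, citing_year, threshold=0.20):
    """refs: list of (journal, publication_year) pairs from one paper.
    Returns {journal: share} for journals at or above the threshold,
    computed over references inside the impact factor window."""
    window = {citing_year - 1, citing_year - 2}
    in_window = [j for (j, y) in refs if y in window]
    if not in_window:
        return {}
    counts = Counter(in_window)
    return {j: n / len(in_window) for j, n in counts.items()
            if n / len(in_window) >= threshold}

# Rough shape of the 2010 Medical Science Monitor review: 445 references
# to Cell Transplantation and 44 to Medical Science Monitor, all in-window.
refs = ([("Cell Transplantation", 2008)] * 200 +
        [("Cell Transplantation", 2009)] * 245 +
        [("Medical Science Monitor", 2009)] * 44 +
        [("Other Journal", 2005)])
print(flag_concentration(refs, 2010))
```

On this synthetic reference list, Cell Transplantation alone accounts for roughly 91% of the in-window references and is flagged, while the journal’s 9% self-citation share falls below the threshold — which is precisely why such a paper evades ordinary self-citation screens.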
Science editors need to discuss how to deal with this issue. If disciplinary norms and decorum cannot keep this kind of behavior at bay, the threat of being delisted from the JCR may be necessary.