The World Journal of Gastroenterology was off to a brilliant start. In 2000, the journal received its first impact factor, 0.993. It rose to 1.445 in 2001, to 2.532 in 2002, and to 3.318 in 2003.
In 2004, something startling happened: the journal was de-listed from Thomson Reuters’ Journal Citation Reports (JCR).
The exceptional trajectory of the World Journal of Gastroenterology was being fueled not by recognition from outside authors but by self-citation. More than 90% of the citations used to calculate its impact factor came from its own papers; remove these, and the journal’s performance plummets to less than one-tenth of its official score.
The World Journal of Gastroenterology reappeared in the JCR in 2008 with a respectable impact factor of 2.081, but this time just 8% of the citations used to calculate its score came from the journal itself, a rate that has remained relatively constant through 2010.
Journal self-citation is a normal feature of scientific authorship. Journals concentrate articles by topic, and since scientific research builds on prior discovery, one would naturally expect some degree of journal self-citation. Detecting when self-citation is used purposefully to distort a journal’s performance, however, is much trickier. Proving it requires evidence of intent, such as a letter from the editor-in-chief requesting that authors cite more papers from the journal.
Even without a smoking gun, it’s still possible to detect odd patterns in the citation data.
In 2003, Marie McVeigh, Director of the JCR and Bibliographic Policy at Thomson Reuters, conducted a study to determine which factors may be driving journal self-citation. She reports that self-citation appears to be a characteristic not of a journal’s size or subject but of the behavior of individual journals. Remove self-citations from the impact factor calculation and most journals stay fixed in their positional ranking. Most, that is.
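McVeigh’s exercise, recomputing the impact factor with self-citations stripped out, is simple arithmetic: the two-year impact factor divides citations received in a year (to items from the previous two years) by the number of citable items published in those two years. A minimal sketch of the recalculation, using entirely hypothetical numbers rather than any journal’s actual data:

```python
def impact_factor(total_citations, self_citations, citable_items,
                  include_self=True):
    """Two-year impact factor, optionally excluding journal self-citations.

    total_citations: citations received this year to items from the
                     previous two years
    self_citations:  the subset of those citations that came from the
                     journal's own papers
    citable_items:   articles and reviews published in the previous two years
    """
    numerator = total_citations if include_self else total_citations - self_citations
    return numerator / citable_items

# Hypothetical journal: 1,000 citations, 900 of them self-citations,
# to 330 citable items from the previous two years.
with_self = impact_factor(1000, 900, 330)
without_self = impact_factor(1000, 900, 330, include_self=False)

print(f"self-citation rate: {900 / 1000:.0%}")            # 90%
print(f"IF with self-citations:    {with_self:.2f}")      # 3.03
print(f"IF without self-citations: {without_self:.2f}")   # 0.30
```

The tenfold gap between the two figures mirrors the pattern that got the World Journal of Gastroenterology de-listed: an official score driven almost entirely by the numerator’s self-citations.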
When McVeigh detects journals whose impact factors are derived largely from self-citations and whose rankings change drastically when self-citations are removed, she does what most parents do when their children start misbehaving — she puts them in “time-out.”
Time-out is my analogy to what happens for misbehaving journals. They are not kicked out of the JCR, only suspended for a short period of time (a couple of years typically) in order for them to get their house in order. It’s a time for the journal editors to contemplate the consequences of their behavior — if they are willing to behave themselves and play well with others, they are allowed back to play. No one is forcing the journal to change its citation practices or policies, but they do need to exhibit acceptable behavior if they want to be listed in the JCR.
This year, 33 journals were suspended for “extremely high journal self-citation rates,” according to the JCR notices file [subscription required]. Many titles are re-listed within a couple of years when they get their self-citation rates under control. For example:
- The Asian-Australasian Journal of Animal Sciences was suspended in 2008 and 2009. In 2007, 78% of the citations used to determine its impact factor were self-citations. When it returned to the JCR in 2010, that figure had dropped to 18%.
- 96% of the citations contributing to Cereal Research Communications’ impact factor in 2007 were self-citations. After being de-listed for two years, it returned to the ranking in 2010 with a self-citation rate of just 10%.
Since many institutions treat the impact factor as a proxy for scientific merit — some are even willing to compensate their authors with cash bonuses based on the impact factor of the journal — it’s not surprising that journal editors take the possibility of time-out very seriously. If the impact factor is but a game, it is one played with serious consequences for misconduct. From Thomson Reuters’ standpoint, punishing those who abuse the system for their own advancement is a way of protecting the validity of the impact factor as a measure of journal prestige.
> Thomson Reuters […] reviews self-citation data for journals in which an exceptionally high self-citation rate artificially influences the impact factor and therefore belies its contribution to the scientific literature. The role of a journal’s impact factor as an objective and integral measure becomes questionable at this level of self-citation.
From a theoretical perspective, this argument makes perfect sense; from a practical standpoint, it’s unknown where the line between “acceptable” and “unacceptable” self-citation behavior is drawn. Perhaps this ambiguity works in Thomson Reuters’ favor: if they stated a limit, one can imagine some editors keeping a running tally of citations to make sure that they approach, but do not exceed, this arbitrary line. The fear that one’s actions are constantly being monitored, as in a Panopticon, may be effective in keeping everyone on their best behavior.
I asked James Testa by phone where the arbitrary line of acceptability is drawn. “We don’t know where that line is, but we know [who has crossed it] when we see it,” he responded. “For example, self-citation becomes a problem for us when it significantly changes the journal’s rank in its JCR subject category,” he added. Testa believes that the fear of being de-listed may have reduced the incidence of questionable practices, such as an editor writing an editorial that cites every paper the journal published in the last three years.
Not wanting to be perceived as a citation cop, Testa emphasized that the main purpose of looking for abuses of self-citation is to maintain the integrity of the impact factor. “We don’t infer motive, we report behavior,” he said. “The particular cause for journal self-citation is less important than its effect.”