"Time out, in the corner" by Ken Wilcox via. Flickr

The World Journal of Gastroenterology was off to a brilliant start. In 2000, the journal received its first impact factor, 0.993. The next year, in 2001, it increased to 1.445. In 2002, it increased again to 2.532, and then to 3.318 in 2003.

In 2004, something startling happened: the journal was de-listed from Thomson Reuters’ Journal Citation Reports (JCR).

The exceptional trajectory of the World Journal of Gastroenterology was being fueled not by recognition from outside authors but by self-citation. More than 90% of the citations used to calculate its impact factor came from its own papers, and removing these drops the journal’s performance to less than one-tenth of its official score.
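For readers unfamiliar with the arithmetic, the two-year impact factor is simply the citations received in a given year to items published in the previous two years, divided by the number of citable items published in those two years. The Python sketch below uses entirely hypothetical counts (not the journal’s actual data) to show how stripping self-citations out of the numerator can collapse a score to roughly a tenth of its official value, as described above.

```python
# Minimal sketch of the two-year impact factor calculation.
# All counts below are hypothetical, not the World Journal of Gastroenterology's data.

def impact_factor(citations, citable_items):
    """Citations received this year to items published in the previous two years,
    divided by the number of citable items published in those two years."""
    return citations / citable_items

citable_items = 1000       # assumed items published over the two prior years
total_citations = 3300     # assumed citations received this year to those items
self_citations = int(total_citations * 0.90)   # ~90% self-citation, as in the post

with_self = impact_factor(total_citations, citable_items)                      # 3.30
without_self = impact_factor(total_citations - self_citations, citable_items)  # 0.33

print(f"Impact factor including self-citations: {with_self:.3f}")
print(f"Impact factor excluding self-citations: {without_self:.3f}")
```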

The World Journal of Gastroenterology reappeared in the JCR in 2008 with a respectable impact factor of 2.081, but this time just 8% of the citations used to calculate its score came from the journal itself, a rate that has remained relatively constant through 2010.

Journal self-citation is a normal feature of scientific authorship. Journals tend to concentrate articles by topic, and since scientific research builds on prior discovery, one would naturally expect some degree of journal self-citation. Detecting when self-citation is used purposefully to distort a journal’s performance, however, is much trickier. One needs to show intent, such as a letter from the editor-in-chief requesting that authors cite more papers from the journal.

Even without a smoking gun, it’s still possible to detect odd patterns in the citation data.

In 2003, Marie McVeigh, Director of the JCR and Bibliographic Policy at Thomson Reuters, conducted a study to determine which factors may be driving journal self-citation. She reported that self-citation does not appear to be a characteristic of a journal’s size or subject but of the behavior of individual journals. Remove self-citations from the impact factor calculation and most journals stay fixed in their positional ranking. Most, that is.

When McVeigh detects journals whose impact factors are derived largely from self-citations and whose rankings change drastically when self-citations are removed, she does what most parents do with their children when they start misbehaving — she puts them in “time-out.”

Time-out is my analogy for what happens to misbehaving journals. They are not kicked out of the JCR, only suspended for a short period of time (typically a couple of years) so they can get their house in order. It’s a time for the journal editors to contemplate the consequences of their behavior — if they are willing to behave themselves and play well with others, they are allowed back to play. No one is forcing the journal to change its citation practices or policies, but it does need to exhibit acceptable behavior if it wants to be listed in the JCR.

This year, 33 journals were suspended for “extremely high journal self-citation rates,” according to the JCR notices file [subscription required]. Many titles are re-listed within a couple of years when they get their self-citation rates under control. For example:

  • The Asian-Australasian Journal of Animal Sciences was suspended in 2008 and 2009. In 2007, 78% of the citations used to determine its impact factor were self-citations. When it returned to the JCR in 2010, this number had dropped to 18%.
  • 96% of the citations contributing to Cereal Research Communications’ impact factor in 2007 were self-citations. After being de-listed for two years, it returned to the rankings in 2010 with a self-citation rate of just 10%.

Since many institutions consider the impact factor a proxy for scientific merit — some are even willing to compensate their authors with cash bonuses based on the impact factor of the journal — it’s not surprising that journal editors are taking the possibility of time-out very seriously. If the impact factor is but a game, it is one that is played with serious consequences for misconduct. From Thomson Reuters’ standpoint, punishing those who abuse the system for their own advancement is a way of protecting the validity of the impact factor as a measure of journal prestige.

Responding in 2008 to a widely voiced claim that impact factors are manipulated and abused by journal editors, James Testa, VP of Editorial Development & Publisher Relations at Thomson Reuters, writes:

Thomson Reuters […] reviews self-citation data for journals in which an exceptionally high self-citation rate artificially influences the impact factor and therefore belies its contribution to the scientific literature. The role of a journal’s impact factor as an objective and integral measure becomes questionable at this level of self-citation.

From a theoretical perspective, this argument makes perfect sense; from a practical standpoint, it’s unknown where the line between “acceptable” and “unacceptable” self-citation behavior is drawn. Perhaps this ambiguity weighs in Thomson Reuters’ favor, for if they stated a limit, you can imagine that some editors would keep a running tally of citations to make sure that they approach, but do not exceed, this arbitrary line. The fear that one’s actions are constantly being monitored, like a Panopticon, may be effective in keeping everyone on their best behavior.

I asked James Testa by phone about where the arbitrary line of acceptability is drawn. “We don’t know where that line is, but we know [who has crossed it] when we see it,” he responded. “For example, self-citation becomes a problem for us when it significantly changes the journal’s rank in its JCR subject category,” he added. Testa believes that the fear of being de-listed may have reduced the incidence of questionable practices, such as when an editor writes an editorial and cites every paper that was published in the last three years.
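Testa’s rule of thumb suggests a simple screen: recompute each title’s impact factor without self-citations, re-rank the subject category, and flag journals whose position moves sharply. The sketch below is a hypothetical illustration of that idea; the journal names, citation counts, and the rank-shift threshold are all invented, and this is not Thomson Reuters’ actual procedure.

```python
# Hypothetical screen for self-citation-driven rank shifts (illustrative only;
# not Thomson Reuters' actual procedure). Journal names and counts are invented.

def rank_by(journals, key):
    """Return {journal name: rank}, with rank 1 for the highest value of `key`."""
    ordered = sorted(journals, key=lambda j: j[key], reverse=True)
    return {j["name"]: pos + 1 for pos, j in enumerate(ordered)}

journals = [
    # cites = citations to items from the prior two years; self = self-citations among them
    {"name": "Journal A", "cites": 3300, "self": 2970, "items": 1000},
    {"name": "Journal B", "cites": 2400, "self": 240,  "items": 800},
    {"name": "Journal C", "cites": 1500, "self": 150,  "items": 600},
]

for j in journals:
    j["if_all"] = j["cites"] / j["items"]
    j["if_no_self"] = (j["cites"] - j["self"]) / j["items"]

rank_all = rank_by(journals, "if_all")
rank_no_self = rank_by(journals, "if_no_self")

RANK_SHIFT_THRESHOLD = 2   # invented cut-off for a "drastic" change in position

for j in journals:
    shift = rank_no_self[j["name"]] - rank_all[j["name"]]
    if shift >= RANK_SHIFT_THRESHOLD:
        print(f"{j['name']}: rank {rank_all[j['name']]} -> {rank_no_self[j['name']]} "
              f"once self-citations are removed; flag for review")
```

In this toy example, Journal A leads its category on the strength of self-citations but falls to the bottom once they are excluded, which is exactly the kind of movement the quoted criterion describes.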

Not wanting to be perceived as a citation cop, Testa emphasized that the main purpose of looking for abuses of self-citation is to maintain the integrity of the impact factor. “We don’t infer motive, we report behavior,” he said. “The particular cause for journal self-citation is less important than its effect.”

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

28 Thoughts on "Gaming the Impact Factor Puts Journal In Time-out"

Thanks for the article. It surprises and saddens me that scholarly publishing hasn’t moved on from the traditional impact factor to something at the article level. CrossRef’s “cited-by” linking, for example, seems like a much more useful measure of impact as well as a tool for researchers. It’s at the article level and shows exactly which articles are citing the particular article. Self-citing at a journal or an author level is obvious in such a measure.

If you want to simplify it to counts, that can easily be done. If one really needs impact at the journal level, it is always possible to aggregate data from article-level measures to journal-level measures. Cited-by is also open to any journal that is willing to join CrossRef and assign DOIs, rather than Thomson Reuters’ voodoo-magic approach for selecting journals for inclusion in the JCR.

I don’t think it’s the publishing end of things that has failed to move on, but more the academia side, at least those who use the Impact Factor for career advancement and funding decisions. Most, if not all, publishers would quickly adopt whatever metrics their readers choose as meaningful and important. As of now, that’s the Impact Factor.

Yes, you are correct. It is really academics who are to blame. I was thinking of scholarly publishing in a broad sense, but clearly it’s us in academia, not publishers, who are to blame.

Article-level citation metrics are useful, but they tell you nothing about an article that has just been published. Would a promotion and tenure committee be asked to wait 3-10 years for a candidate’s portfolio of publications to age in order to make a decision?

I spent some time recently with an editor of a prominent history journal, and he noted that in his journal, an article’s citations tend to peak around 5 years after publication.

As I am sure you know, the JCR is only based on 2 years’ worth of data, not 3-10 years. Promotion and tenure decisions are usually based on 6 or 7 years’ worth of a candidate’s scholarly activities. Sometimes people go up for promotion early, but in my experience it is not often, and I believe some universities are moving to even longer review periods.

Just because an article is in a “high impact” journal does not mean it will be read and have influence on others’ research. There are also examples of articles in journals that do not have a high impact score, or are not even rated in the JCR, that end up being seminal articles.

From having gone through the process myself and sat on a number of AP&T reviews as both a committee member and chair, article-level data of the type provided by cited-by would be far more useful than just counting publications and weighting them by the impact factor of the journals in which they were published. Hopefully the committee members actually read some of the candidate’s work, but getting some assessment of a candidate’s impact on their field is helpful. I just feel article-level metrics are much more useful, even if you can’t get good data on the last publication or two they produced. And again, this comes from experience.

David, I think we have some misunderstanding. I’m not arguing that a journal’s Impact Factor should stand as a single and definitive metric that encapsulates the value of one’s contribution to science. But in the absence of long-term metrics, the aggregate of where one finds an author’s work tells us something about that author.

Actually, the JCR reports a 2-year Impact Factor, a 5-year Impact Factor, Journal Self-Cites, and two Eigenfactor metrics (the Eigenfactor Score and the Article Influence Score).

Yes, they report all that. What do publishers report? The 2-year impact factor. If you go to the ISI website, what is reported as the “Impact Factor”? The one that is based on 2 years of data.

So what you’re saying is that the way to game the Impact Factor is to publish multiple journals and have them cite one another…

Thanks for this further information, though, on how Thomson Reuters does do some safeguarding against gaming. Often when discussing metrics, I hear that citation is gameable, but I have yet to see any terribly effective strategies. I’ve heard tales of “citation circles” but never encountered one in the wild, either as an editor or as a publishing scientist.

Interesting comment. If you have multiple journals that cite each other, Thomson Reuters, as far as I know, does not look at this on a publisher level.

Yes, manipulation of the impact factor can be carried out very easily without journal self-citation through the cooperation of journals. For example, the International Journal of Nonlinear Sciences and Numerical Simulation garnered hundreds of citations from single volumes of other journals for which the IJNSNS editor-in-chief was acting as guest editor. Details at
http://umn.edu/~arnold/papers/impact-factors.pdf

Time-outs for self-citation are a band-aid applied to a hopelessly injured patient.

Would be interesting to know all the journals on the “time out” list… can you post that? I remember a few years back that several urology editors were up in arms regarding two specific journals (in that field) that were allegedly asking their authors to add specific citations. The result, of course, was a large rise in IF and in self-citations. I don’t recall that the journals were ever de-listed, though. One journal’s response was that it covered a niche subject and the journal was well recognized, thus the authors couldn’t help but cite the journal. They didn’t respond to the allegation of asking authors to add citations, though.

The current list of journals suppressed from the JCR is posted in the product Notices file. Once a journal is restored to JCR coverage, it is removed from the Notices file, but its 5-year trend graph will have “empty” spaces for the years of suppression. For those years, no Journal Impact Factor is calculated for the journal.

In the 2004 journal self-citation study Phil mentioned, we didn’t find a strong association between a small number of journals in a subject and a high journal self-citation rate. Niche publications are not necessarily isolated from the more general literature.

Suppression is a last resort, really. Because we have made the contribution of journal self-citation increasingly visible in the product itself, suppression is only for extreme situations.

Dear Phil,

You might be interested in the following letter, which supports your case that systematic and inflationary self-citation is encouraged by journal editorial boards: Mannino, D. M. (2005). Impact factor, impact, and smoke and mirrors. American Journal of Respiratory and Critical Care Medicine, 171(4), 417-418.
