Authority, Controversial Topics, Metrics and Analytics

Gaming the Impact Factor Puts Journal In Time-out

"Time out, in the corner" by Ken Wilcox via Flickr

The World Journal of Gastroenterology was off to a brilliant start. In 2000, the journal received its first impact factor, 0.993. The next year, in 2001, it increased to 1.445. In 2002, it increased again to 2.532, and then to 3.318 in 2003.

In 2004, something startling happened: the journal was de-listed from Thomson Reuters’ Journal Citation Reports (JCR).

The exceptional trajectory of the World Journal of Gastroenterology was being fueled not by recognition from outside authors but by self-citation. More than 90% of the citations used to calculate its impact factor came from its own papers; remove these, and the journal’s performance plummets to less than one-tenth of its official score.
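To make the arithmetic concrete, here is a minimal sketch of the standard two-year impact factor calculation, with and without journal self-citations in the numerator. The figures below are hypothetical, chosen only to mirror the pattern described above, not the journal’s actual data:

```python
def impact_factor(citations, citable_items, self_citations=0):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items from those years. Optionally excludes journal
    self-citations from the numerator."""
    return (citations - self_citations) / citable_items

# Hypothetical figures: 3,000 citations to 1,000 citable items,
# of which 2,750 citations come from the journal's own papers.
official = impact_factor(citations=3000, citable_items=1000)        # 3.0
external = impact_factor(citations=3000, citable_items=1000,
                         self_citations=2750)                       # 0.25

print(official, external)
```

With more than 90% of citations being self-citations, the “corrected” score lands below one-tenth of the official one, which is the kind of gap that makes a journal stand out in the data.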

The World Journal of Gastroenterology reappeared in the JCR in 2008 with a respectable impact factor of 2.081, but this time just 8% of the citations used to calculate its score came from the journal itself, a rate that remained relatively constant through 2010.

Journal self-citation is a normal behavior of scientific authorship. Journals tend to concentrate articles by topic, and since scientific research builds on prior discovery, one would naturally expect some degree of journal self-citation. Detecting when self-citation is used purposefully to distort a journal’s performance, however, is much trickier. One needs to show intent, such as a letter from the editor-in-chief requesting that authors cite more papers from the journal.

Even without a smoking gun, it’s still possible to detect odd patterns in the citation data.

In 2003, Marie McVeigh, Director of the JCR and Bibliographic Policy at Thomson Reuters, conducted a study to determine which factors may be driving journal self-citation. She reported that self-citation appears to be a characteristic not of a journal’s size or subject but of the behavior of individual journals. Remove self-citations from the impact factor calculation and most journals stay fixed in their positional rankings. Most, that is.

When McVeigh detects journals whose impact factors are derived largely from self-citations and whose rankings change drastically when self-citations are removed, she does what most parents do when their children start misbehaving — she puts them in “time-out.”

Time-out is my analogy for what happens to misbehaving journals. They are not kicked out of the JCR, only suspended for a short period (typically a couple of years) so they can get their house in order. It’s a time for the journal editors to contemplate the consequences of their behavior — if they are willing to behave themselves and play well with others, they are allowed back to play. No one is forcing a journal to change its citation practices or policies, but it does need to exhibit acceptable behavior if it wants to be listed in the JCR.

This year, 33 journals were suspended for “extremely high journal self-citation rates,” according to the JCR notices file [subscription required]. Many titles are re-listed within a couple of years when they get their self-citation rates under control. For example:

  • The Asian-Australasian Journal of Animal Sciences was suspended in 2008 and 2009. In 2007, 78% of the citations used to determine its impact factor were self-citations. When it returned to the JCR in 2010, that figure had dropped to 18%.
  • 96% of the citations contributing to Cereal Research Communications’ impact factor in 2007 were self-citations. After being de-listed for two years, it returned to the ranking in 2010 with a self-citation rate of just 10%.

Since many institutions consider the impact factor a proxy for scientific merit — some are even willing to compensate their authors with cash bonuses based on the impact factor of the journal — it’s not surprising that journal editors are taking the possibility of time-out very seriously. If the impact factor is but a game, it is one that is played with serious consequences for misconduct. From Thomson Reuters’ standpoint, punishing those who abuse the system for their own advancement is a way of protecting the validity of the impact factor as a measure of journal prestige.

Responding in 2008 to a widely voiced claim that impact factors are manipulated and abused by journal editors, James Testa, VP Editorial Development & Publisher Relations at Thomson Reuters, writes:

Thomson Reuters [...] reviews self-citation data for journals in which an exceptionally high self-citation rate artificially influences the impact factor and therefore belies its contribution to the scientific literature. The role of a journal’s impact factor as an objective and integral measure becomes questionable at this level of self-citation.

From a theoretical perspective, this argument makes perfect sense; from a practical standpoint, it’s unknown where the line between “acceptable” and “unacceptable” self-citation behavior is drawn. Perhaps this ambiguity weighs in Thomson Reuters’ favor, for if they state a limit, you can imagine that some editors may keep a running tally of citations to make sure that they approach, but not exceed, this arbitrary line. The fear that one’s actions are constantly being monitored, like a Panopticon, may be effective in keeping everyone on their best behavior.

I asked James Testa by phone where the arbitrary line of acceptability is drawn. “We don’t know where that line is, but we know [who has crossed it] when we see it,” he responded. “For example, self-citation becomes a problem for us when it significantly changes the journal’s rank in its JCR subject category,” he added. Testa believes that the fear of being de-listed may have reduced the incidence of questionable practices, such as when an editor writes an editorial and cites every paper the journal published in the last three years.

Not wanting to be perceived as a citation cop, Testa emphasized that the main purpose of looking for abuses of self-citation is to maintain the integrity of the impact factor. “We don’t infer motive, we report behavior,” he said. “The particular cause for journal self-citation is less important than its effect.”

About Phil Davis

I am an independent researcher and publishing consultant specializing in the statistical analysis of readership and citation data. I am a former postdoctoral researcher in science communication and former science librarian. http://phil-davis.org/

Discussion

28 thoughts on “Gaming the Impact Factor Puts Journal In Time-out”

  1. Thanks for the article. It surprises and saddens me that scholarly publishing hasn’t moved on from the traditional impact factor to something at the article level. CrossRef’s “cited-by” linking, for example, seems like a much more useful measure of impact as well as a tool for researchers. It’s at the article level and shows exactly which articles are citing the particular article. Self-citing at a journal or an author level is obvious in such a measure.

    If you want to simplify it to counts, that can easily be done. If one really needs impact at the journal level, it is always possible to aggregate article-level measures up to the journal level. Cited-by is also open to any journal that is willing to join CrossRef and assign DOIs, rather than Thomson Reuters’ voodoo-magic approach to selecting journals for inclusion in the JCR.

    Posted by David Solomon | Oct 17, 2011, 9:16 am
    • I don’t think it’s the publishing end of things that has failed to move on, but more the academia side — at least those who use the Impact Factor for career advancement and funding decisions. Most, if not all, publishers would quickly adopt whatever metrics their readers choose as meaningful and important. As of now, that’s the Impact Factor.

      Posted by David Crotty | Oct 17, 2011, 9:42 am
      • Yes, you are correct. It is really academia that is to blame. I was thinking of scholarly publishing in a broad sense, but clearly it’s us in academia, not publishers, who are to blame.

        Posted by David Solomon | Oct 17, 2011, 12:46 pm
    • Article-level citation metrics are useful but they tell you nothing about the article when it is just published. Would a promotion and tenure committee be asked to wait 3-10 years for a candidate’s portfolio of publications to age in order to make a decision?

      Posted by Phil Davis | Oct 17, 2011, 10:25 am
      • I spent some time recently with an editor of a prominent history journal, and he noted that in his journal, an article’s citations tend to peak around 5 years after publication.

        Posted by David Crotty | Oct 17, 2011, 10:27 am
        • As I am sure you know, the JCR is only based on 2 years’ worth of data, not 3-10 years. Promotion and tenure decisions are usually based on 6 or 7 years’ worth of a candidate’s scholarly activities. Sometimes people go up for promotion early, but in my experience it is not often, and I believe some universities are moving to even longer review periods.

        Just because an article is in a “high impact” journal does not mean it will be read and have influence on others’ research. There are also examples of articles in journals that do not have a high impact score, or are not even rated in the JCR, that end up being seminal articles.

        From having gone through the process myself and sat on a number of AP&T reviews as both a committee member and chair, article-level data of the type provided by cited-by would be far more useful than just counting publications and weighting them by the impact factor of the journals in which they were published. Hopefully the committee members actually read some of the candidate’s work, but getting some assessment of a candidate’s impact on their field is helpful. I just feel article-level metrics are much more useful, even if you can’t get good data on the last publication or two they produced. And again, this comes from experience.

        Posted by David Solomon | Oct 17, 2011, 12:22 pm
        • David, I think we have some misunderstanding. I’m not arguing that a journal’s Impact Factor should stand for a single and definitive metric that encapsulates the value of one’s contribution to science. But in the absence of long-term metrics, the aggregate of where one finds an author’s work tells us something about that author.

          Posted by Phil Davis | Oct 17, 2011, 12:36 pm
        • Actually, the JCR reports the 2-year Impact Factor, the 5-year Impact Factor, Journal Self-Cites, and two Eigenfactor metrics (Eigenfactor Score and Article Influence Score).

          Posted by Kate McCain | Oct 17, 2011, 4:42 pm
          • Yes, they report all that. What do publishers report? The 2-year impact factor. If you go to the ISI website, what is reported as the “Impact Factor”? The one that is based on 2 years of data.

            Posted by David Solomon | Oct 17, 2011, 6:27 pm
  2. So what you’re saying is that the way to game the Impact Factor is to publish multiple journals and have them cite one another…

    Thanks for this further information, though, on how Thomson Reuters does do some safeguarding against gaming. Often when discussing metrics, I hear that citation is gameable, but I have yet to see any terribly effective strategies. I’ve heard tales of “citation circles” but never encountered one in the wild, either as an editor or as a publishing scientist.

    Posted by David Crotty | Oct 17, 2011, 9:40 am
    • Interesting comment. If you have multiple journals that cite each other, Thomson Reuters, as far as I know, does not look at this on a publisher level.

      Posted by Angela | Oct 17, 2011, 10:48 am
    • Yes, manipulation of the impact factor can be carried out very easily without journal self-citation through the cooperation of journals. For example, the International Journal of Nonlinear Sciences and Numerical Simulation garnered hundreds of citations from single volumes of other journals for which the IJNSNS editor-in-chief was acting as guest editor. Details at

      http://umn.edu/~arnold/papers/impact-factors.pdf

      Time-outs for self-citation are a band-aid on a hopelessly injured patient.

      Posted by Doug Arnold | Oct 24, 2011, 12:58 pm
  3. Would be interesting to know all the journals on the “time-out” list… can you post that? I remember a few years back that several urology editors were up in arms regarding two specific journals (in that field) that were allegedly asking their authors to add specific citations. The result, of course, was a large rise in IF and, of course, in self-citations. I don’t recall that the journals were ever delisted, though. One journal’s response was that it covered a niche subject and the journal was well recognized, thus the authors couldn’t help but cite it. They didn’t respond to the allegation of asking authors to add citations, though.

    Posted by RJJ | Oct 17, 2011, 9:56 am
  4. The current list of journals suppressed from the JCR is posted in the product Notices file. Once a journal is restored to the JCR coverage, it is removed from the Notices file, but its 5-year trend graph will have “empty” spaces for the years of suppression. For those years, the journal does not have a Journal Impact Factor calculated.

    In the 2004 journal self-citation study Phil mentioned, we didn’t find a strong association between a small number of journals in a subject and a high journal self-citation rate. Niche publications are not necessarily isolated from the more general literature.

    Suppression is a last resort, really. Because we have made the contribution of journal self-citation increasingly visible in the product itself, suppression is only for extreme situations.

    Posted by Marie @ Thomson Reuters | Oct 17, 2011, 10:59 am
  5. Dear Phil,

    You might be interested in the following letter to support your case of systematic and inflationary self-citation encouraged by journal editorial boards: Mannino, D. M. (2005). Impact factor, impact, and smoke and mirrors. American Journal of Respiratory and Critical Care Medicine, 171(4), 417-418.

    Posted by WoW!ter (@Wowter) | Oct 17, 2011, 2:47 pm

Trackbacks/Pingbacks

  1. Pingback: Om censurerade impact-faktorer | Metrics - Oct 17, 2011

  2. Pingback: Tweets, and Our Obsession with Alt Metrics « The Scholarly Kitchen - Jan 4, 2012

  3. Pingback: When Journal Editors Coerce Authors to Self-Cite « The Scholarly Kitchen - Feb 2, 2012

  4. Pingback: The Emergence of a Citation Cartel « The Scholarly Kitchen - Apr 10, 2012

  5. Pingback: Influencing Journal Impact Factor – Citation Cartels « THL News Blog - May 3, 2012

  6. Pingback: The Black Market for Facebook “Likes,” and What It Means for Citations and Alt-Metrics « The Scholarly Kitchen - May 18, 2012

  7. Pingback: The Black Market for Facebook “Likes,” and What It Means for Citations and Alt-Metrics | Transformative learning - May 18, 2012

  8. Pingback: Citation Cartel Journals Denied 2011 Impact Factor « The Scholarly Kitchen - Jun 29, 2012

  9. Pingback: Nature News Blog: Record number of journals banned for boosting impact factor with self-citations : Nature News Blog - Jun 29, 2012

  10. Pingback: Record number of journals banned for boosting impact factor with self-citations | Netarum News - Jun 29, 2012

  11. Pingback: A first? Papers retracted for citation manipulation « Retraction Watch - Jul 5, 2012

  12. Pingback: Want to increase your Impact Factor? « Mind Games 2.0 - Jul 11, 2012

  13. Pingback: Gaming Google Scholar Citations, Made Simple and Easy « The Scholarly Kitchen - Dec 12, 2012
