Image: "Details" by mag3737 via Flickr

In June we celebrate Father’s Day and the official start of summer. Students leave for break, and academics welcome the time to write or conduct field research.

For journal editors and publishers, however, June can be an anxious month as they collectively wait for the release of the 2010 Journal Citation Reports (JCR) and, within it, the impact factor — a single number that many consider to reflect the standing of a title and its ranking against its competitors.

The journal impact factor is deceptively simple. A journal’s 2010 impact factor is derived by summing the citations made in 2010 to articles published during the previous two years (i.e., 2008 and 2009) and dividing this sum by the total number of articles published during those two years.
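Written out as a formula (the C and N notation and the numbers below are mine, purely for illustration, not Thomson Reuters’):

```latex
% Impact factor for 2010, in my own (hypothetical) notation:
%   C_{2010}(y) = citations made in 2010 to items published in year y
%   N_{y}       = number of articles published in year y
\[
  \mathrm{IF}_{2010} = \frac{C_{2010}(2008) + C_{2010}(2009)}{N_{2008} + N_{2009}}
\]
% Worked example with invented numbers: a journal publishing 120
% articles in 2008 and 130 in 2009, and receiving 150 + 225 citations
% to those cohorts during 2010, gets (150 + 225)/(120 + 130) = 1.5.
```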

I say “deceptively simple” because there is a whole litany of caveats that influence how this calculation is made, which is why those who attempt to calculate their own impact factor using the Citation Report feature of Web of Science (WoS) often arrive at results that differ from the figure published in the JCR. Iain Craig, Senior Manager, Market Research & Analysis at Wiley-Blackwell, helped me understand why:

  1. WoS and JCR use different citation matching protocols. WoS relies on matching citing articles to cited articles. To do that, it requires either a DOI or enough information to make a credible match. An error in the author, volume, or page numbers may result in a missed citation, although WoS does attempt to correct for errors if there is a close match. In contrast, all that is required to register a citation in the JCR is the name of the journal and the publication year. With a lower bar of accuracy required to make a match, it is more likely that the JCR will pick up citations that are not registered in WoS. (A toy sketch contrasting the two matching rules follows this list.)
  2. WoS and JCR use different citation windows. WoS’s Citation Report will register citations when they are indexed, not when they are published. If a December 2009 issue is indexed in January 2010, then the citations will be counted as being made in 2010, not 2009. In comparison, the JCR counts citations by publication year. For large journals, this discrepancy is not normally an issue, as a citation gain at the beginning of the cycle ends up being balanced by the omission of citations at the end of the cycle. For smaller journals that publish infrequently, the addition or omission of a single issue may make a significant difference in that year’s impact factor. (See the second sketch following this list.)
  3. WoS is dynamic, while the JCR is static. To calculate journal impact factors, Thomson Reuters takes an extract of its dataset in March, whether or not it has received and indexed all journal content from the prior year. WoS, in comparison, continues to index as issues are received.
  4. Differences in indexing. Not all journal content is indexed in WoS. For example, a journal issue containing conference abstracts may not show up in the WoS dataset, but citations to these abstracts may still count toward a journal’s impact factor. In addition, the definition of what is considered an “article” is often a source of controversy for journal editors. Citations from and to editorials and letters are counted in the numerator of the impact factor, but these items may escape being counted among the articles that comprise the denominator. (A purely hypothetical example: a journal with 100 research articles and 20 editorials in the window, receiving 240 citations to all of them, would score 240/100 = 2.4 rather than 240/120 = 2.0.)
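
To make the first caveat concrete, here is a toy Python sketch contrasting a strict, WoS-style match with a loose, JCR-style match. This is my own illustration with invented field names, not Thomson Reuters’ actual matching algorithm:

```python
# Toy illustration only -- not Thomson Reuters' actual matching code.
# A WoS-style match needs a DOI or agreement on several bibliographic
# fields; a JCR-style match needs only the journal name and year.

def wos_style_match(reference, record):
    """Strict matching: a DOI match, or agreement on author,
    volume, and first page."""
    if reference.get("doi") and reference["doi"] == record.get("doi"):
        return True
    fields = ("author", "volume", "first_page")
    return all(reference.get(f) == record.get(f) for f in fields)

def jcr_style_match(reference, record):
    """Loose matching: journal name and publication year suffice."""
    return (reference.get("journal") == record.get("journal")
            and reference.get("year") == record.get("year"))

# The indexed source item, and a reference to it containing a page typo:
record = {"journal": "J. Example", "year": 2009, "author": "Smith",
          "volume": 12, "first_page": 101}
reference = {"journal": "J. Example", "year": 2009, "author": "Smith",
             "volume": 12, "first_page": 110}  # typo: 110, not 101

print(wos_style_match(reference, record))  # False -- lost in WoS
print(jcr_style_match(reference, record))  # True  -- counted in JCR
```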

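And a companion sketch for the second caveat, counting the same citations by index year versus by publication year; again, the data and counting rules are invented purely for illustration:

```python
# Toy illustration only: the same three citations counted by index year
# (WoS-style) versus by publication year (JCR-style). Data are invented.

citations = [
    # (year the citing issue was published, year the issue was indexed)
    (2009, 2009),  # indexed promptly
    (2009, 2010),  # December 2009 issue indexed in January 2010
    (2010, 2010),
]

wos_count_2009 = sum(1 for pub, idx in citations if idx == 2009)
jcr_count_2009 = sum(1 for pub, idx in citations if pub == 2009)

print(wos_count_2009)  # 1 -- the late-indexed issue slips into 2010
print(jcr_count_2009)  # 2 -- counted in its publication year
```
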
There are several other caveats to understanding a journal’s impact factor, but it will suffice to state that its calculation is not as simple as its definition implies and that the devil is always lurking in the details.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

14 Thoughts on "Anticipating the 2010 Impact Factor — The Devil Is in the Details"

What is most curious, but goes unmentioned in your article, is the puzzle of *why* the two databases differ. Both are owned and maintained by Thomson Reuters. There may be a historical explanation, but that is of no interest now. They should decide on an optimal citation matching algorithm and use it consistently and dynamically for both purposes.

For those readers interested in more detail on how ISI calculates the numerator and denominator of the Journal Impact Factor, please see the following articles:

Hubbard SC, McVeigh ME. Casting a wide net: the Journal Impact Factor numerator. Learned Publishing. 2011;24(2):133-7. http://dx.doi.org/10.1087/20110208

McVeigh ME, Mann SJ. The Journal Impact Factor Denominator: Defining Citable (Counted) Items. JAMA. 2009;302(10):1107-9. http://dx.doi.org/10.1001/jama.2009.1301

Does anyone know when the new JCR impact factor list for 2010 is coming out?

Rumor has it that it’s due out June 28th, but these rumors are sometimes wrong.

And indeed, today (June 28) Thomson Reuters announced that it has released the 2010 JCR and that it is available right now. However, when I log into the Web of Knowledge (my institution has a subscription and I have remote access) and then click on the Journal Citation Reports link, I see that the latest release online is 2009. So, where is it…? Do they actually need some additional time to put it online?

I am told that the 2010 data will be released at 1:30pm Eastern Standard Time (EST).

Wow, thanks for the tip! That would be 7:30pm CET (where I am currently); can’t wait… Anyway, best regards.

There is no ISI anymore; it was bought out by Thomson Reuters, and the product is now the Journal Citation Reports.
1) It doesn’t consider journals in languages other than English.
2) It does not include journals based JUST on quality; it is a business and is currently trying to expand into many third-world countries. So it includes some really bad journals just to expand market share, at the cost of excluding much better journals in regions where it already has a firm grasp.
3) The method for including journals is not entirely, or even partially, transparent. Much of it is a company secret that will not be released. Try asking for guidelines: they give you a short list of non-specific criteria that are frequently ignored based on market-share decisions.
4) The impact factor is only one available metric. Others include the h-index, the m-quotient, the g-index, and a variety of twists on these, which can be calculated for journals as well as for researchers. I would argue that citations to a researcher directly assess impact, especially considering that over half of the papers published in highly ranked journals go uncited!
5) For more info on problems with this index, and for alternatives, see http://www.harzing.com and http://scopus.com.
