In June we celebrate Father’s Day and the official start of summer. Students leave for break, and academics welcome the time to write or conduct field research.
For journal editors and publishers, however, June can be an anxious month as they collectively wait for the release of the 2010 Journal Citation Reports (JCR), and within it, the impact factor — a single numerical figure that is considered by many to reflect the standing of a title and its relative ranking against its competitors.
The journal impact factor is deceptively simple. A journal’s 2010 impact factor is derived by summing the citations made in 2010 to articles published during the previous two years (i.e., 2008 and 2009) and dividing this total by the number of articles published during those two years.
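The arithmetic in that definition can be sketched in a few lines. The figures below are invented for illustration; they are not drawn from any real journal or from JCR data.

```python
# Minimal sketch of the two-year impact factor calculation.
# All numbers are hypothetical, used only to illustrate the arithmetic.

def impact_factor(citations_to_prior_two_years, items_prior_two_years):
    """2010 IF = citations made in 2010 to 2008-2009 content,
    divided by the count of citable items published in 2008-2009."""
    return citations_to_prior_two_years / items_prior_two_years

# Hypothetical journal: 450 citations received in 2010 to content
# published in 2008 (120 articles) and 2009 (130 articles).
citations_2010 = 450
articles_2008_2009 = 120 + 130

print(impact_factor(citations_2010, articles_2008_2009))  # 1.8
```

As the caveats below make clear, the hard part is not this division but deciding what gets counted in the numerator and the denominator in the first place.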
I say “deceptively simple” because there is a whole litany of caveats that influence how this calculation is made, which is why those who attempt to calculate their own impact factor using the Citation Report feature of Web of Science (WoS) often arrive at a different result than the figure published in the JCR. Iain Craig, Senior Manager, Market Research & Analysis at Wiley-Blackwell, helped me understand why:
- WoS and JCR use different citation matching protocols. WoS relies on matching citing articles to cited articles. To do that, it requires either a DOI or enough information to make a credible match. An error in the author, volume or page numbers may result in a missed citation. WoS does attempt to correct for errors if there is a close match, however. In contrast, all that is required to register a citation in JCR is the name of the journal and the publication year. With a lower bar of accuracy required to make a match, it is more likely that JCR will pick up citations that are not registered in WoS.
- WoS and JCR use different citation windows. WoS’s Citation Report will register citations when they are indexed, not when they are published. If a December 2009 issue is indexed in January 2010, then the citations will be counted as being made in 2010, not 2009. In comparison, JCR counts citations by publication year. For large journals, this discrepancy is not normally an issue, as a citation gain at the beginning of the cycle ends up being balanced by the omission of citations at the end of the cycle. For smaller journals that publish infrequently, the addition or omission of a single issue may make a significant difference in that year’s impact factor.
- WoS is dynamic, while the JCR is static. In order to calculate journal impact factors, Thomson Reuters takes an extract of its dataset in March, whether or not it has received and indexed all journal content from the prior year. In comparison, WoS continues to index as issues are received.
- Differences in indexing. Not all journal content is indexed in WoS. For example, a journal issue containing conference abstracts may not show up in the WoS dataset, but citations to these abstracts may count toward calculating a journal’s impact factor. In addition, the definition of what is considered an “article” is often a source of controversy for journal editors. Citations from and to editorials and letters count toward the numerator of the impact factor, but these items may escape being counted among the articles that comprise the denominator.
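The numerator/denominator asymmetry in that last point is easy to see with a small calculation. Again, every figure here is invented purely to illustrate the effect, not taken from any actual journal.

```python
# Sketch of how items excluded from the denominator can inflate
# the impact factor. All figures are hypothetical.

articles = 200                # "citable items" counted in the denominator
editorials_and_letters = 20   # often excluded from the denominator
citations_to_articles = 400
citations_to_editorials = 30  # still counted in the numerator

# Numerator includes citations to all content; denominator does not
# include editorials and letters:
reported_if = (citations_to_articles + citations_to_editorials) / articles

# If every published item were counted in both places:
all_items_if = (citations_to_articles + citations_to_editorials) / (
    articles + editorials_and_letters
)

print(round(reported_if, 3))   # 2.15
print(round(all_items_if, 3))  # 1.955
```

Under these made-up numbers, the asymmetry lifts the figure by about ten percent, which helps explain why the classification of front matter is contested so vigorously by editors.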
There are several other caveats to understanding a journal’s impact factor, but it will suffice to state that its calculation is not as simple as its definition implies and that the devil is always lurking in the details.