A traditional, objective measure of scholarly journals is the Impact Factor (IF). First formulated by Eugene Garfield in the 1950s, the premise is to measure the propagation of ideas as expressed in subsequent research and opinion.
To calculate an IF, the Institute for Scientific Information counts the citations a journal receives in one year to the papers it published in the two prior years, then divides that total by the number of citable items the journal published in those two years. The result is that journal’s IF.
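For concreteness, here is a minimal sketch of that arithmetic; every count below is invented for illustration, and the real calculation depends on the index’s definition of a “citable item”:

```python
# A minimal sketch of the two-year Impact Factor arithmetic.
# Every number here is hypothetical, purely for illustration.
citations_in_2009_to_2007_papers = 150
citations_in_2009_to_2008_papers = 210
citable_items_published_in_2007 = 120
citable_items_published_in_2008 = 130

impact_factor_2009 = (
    citations_in_2009_to_2007_papers + citations_in_2009_to_2008_papers
) / (citable_items_published_in_2007 + citable_items_published_in_2008)

print(f"2009 IF: {impact_factor_2009:.2f}")  # 360 / 250 = 1.44
```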
There are well-known deficiencies with the IF, but I think the deficiency now being revealed is its scope, or rather the lack of it. Citation is occurring in new ways, and scientific thinking is not always propagated via the published scientific article.
Take, for instance, Twitter posts and blog posts about scholarly journal articles and findings, a good many of them written by peers in the field. These certainly qualify, philosophically, as propagation of scientific ideas and as published records of citation, yet they don’t count toward IF scores.
Consider video, where references are concealed in a format that is locked and linear.
Consider Wikipedia, or any of the special “-pedias” out there, where scientific information is cited regularly and reliably. These are legitimate citations.
As I asked in a post last summer, are we watching the wrong things?
The IF as it currently stands is a reflection of a publishing paradigm that has been outstripped by modern communication technologies, preferences, and practices.
There’s no denying the underlying brilliance of Garfield’s observations or of how he realized his theory in practice. But the IF is bound to the publishing technology, practices, and limitations of a bygone era.
The opportunity is apparent: adapt the IF to a communications environment that goes well beyond the printed journal, the printed article, and the formal citation list.
Citation is a function more commonly and quickly realized today through linking.
If the true measure of “impact” is the propagation of scientific ideas, then the IF needs to be fundamentally rethought. Otherwise, we’re only getting a view into printed citation lists, and those have less and less impact on our intellectual lives with each passing year.
Discussion
Good point, Kent,
PLoS is trying something new in this space with our ‘Article-Level Metrics’ program. It starts from the premise that the article is more important than the journal in which it is published, and it aims to place a number of new indicators (e.g., social bookmarks, usage, citations, blog coverage, star ratings, etc.) on every article, at the article level.
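To make the idea concrete, here is a minimal sketch of what a per-article record might aggregate; the field names, DOI, and numbers are hypothetical, not PLoS’s actual schema or API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A hypothetical per-article record; field names are illustrative,
# not PLoS's actual data model.
@dataclass
class ArticleMetrics:
    doi: str
    citations: int = 0        # formal citations from indexed journals
    downloads: int = 0        # full-text and PDF usage
    bookmarks: int = 0        # social bookmarking services
    blog_mentions: int = 0    # blog coverage and trackbacks
    star_ratings: List[int] = field(default_factory=list)  # reader ratings, 1-5

    def average_rating(self) -> Optional[float]:
        """Mean reader rating, or None if no one has rated the article yet."""
        if not self.star_ratings:
            return None
        return sum(self.star_ratings) / len(self.star_ratings)

# The indicators attach to the article itself, regardless of journal.
article = ArticleMetrics(doi="10.1371/example.0000001",  # made-up DOI
                         citations=3, downloads=1200, bookmarks=15,
                         blog_mentions=4, star_ratings=[4, 5, 3])
print(article.average_rating())  # 4.0
```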
More information can be found at:
http://everyone.plos.org/2009/06/25/plos-one-and-article-level-metrics/
and
http://everyone.plos.org/2009/05/27/article-level-metrics-at-plos/
Pete Binfield
(Managing Editor, PLoS ONE)
Twitter, social bookmarking, blogs, ratings, etc. are all indicators of some broader notion of article *attention*. But remember that newspapers, magazines, radio and television were common communication technologies during the mid-1950s and no one seriously considered them to be worthwhile indicators of scientific dissemination to researchers (although they did factor into public dissemination of research).
One can reach wide dissemination in one arena and be virtually ignored in others. The Bentham Peer Review hoax received dozens of comments, scores of blog links, coverage in newspapers and magazines, and hundreds of Twitter posts, but I can hardly imagine that it will be a highly cited event in the peer-reviewed journal literature.
In sum: Different dissemination channels mean different things.
I agree there are boundaries that need to exist for citation to be meaningful, but the current boundary is potentially antiquated because it’s limited to article reference lists. Also, in the 1950s, there was no engineered, efficient way to peer into news coverage for citations. Now, Google News can do it. If a blog on Nature, a blog on BMJ, and a video on NEJM all refer to a paper, aren’t those citations legitimate? Reflective of propagation of scientific ideas? Why is this different from an editorial that cites a paper? Or a letter to the editor that cites an article? I think it’s partially because the impact factor’s in a rut.
One other factor to think about is that the academic blogging/twittering crowd (at least in biology) tends to lean toward research in particular areas (evolution, bioinformatics) and also tends to prop up fellow bloggers/tweeters. This is only a small minority of scientists, and I’d worry that factoring this into any impact measurement would, at least as things currently stand, introduce a strange bias that’s irrelevant to the majority of scientists.
And as you note, a horrible paper is more likely to be blogged about than a fairly good one. I was thinking of the recent paper that slipped some intelligent design language past the editors and caused a furor. That shouldn’t rank higher than a paper that is scientifically more significant.
And this differs from scholarly publishing how, exactly?
Editorial bias, preferences, and predilections are endemic. Citations can point to a retracted paper or a mistake (real or alleged).
See http://scholarlykitchen.sspnet.org/2008/06/30/the-h-index-an-objective-mismeasure/ for a bit about why papers are cited. There is no clear reason, and impact isn’t always a positive thing.
Too true, Kent, it’s really hard to cut bias out of a system like this. But Twitter and blogging are such tiny minority activities that it would mean putting an enormous amount of power into the hands of a few hundred scientists, none of whom were chosen for merit. Given that there are millions of working scientists in the USA alone, it seems wrong to have their fate determined by a bunch of early graduate students who have the time to chat online. At least with citations you’re requiring the scientist to publish an original piece of research for their vote to count. And publishing research is a common activity for most scientists, something vital to their careers, unlike blogging, which is often seen as a detriment.
But you’re right that negative citations count just like positive ones, which is not a good thing. The other big problem I have with the impact factor is that it’s determined by a private company’s secret formula, which they are unwilling to share with anyone else. How can we know whether it’s meaningful if we have no idea what metric is actually being used?
I’ve come to believe that full article downloads should become the measure of impact. An article does not have to be cited in traditional journal media to have influence, but it does have to be read.
The Impact Factor is a proprietary product that draws funds from the STM ecosystem. It over-emphasizes the journal as article wrapper. Large publishers have departments devoted to analyzing a journal’s IF and calculating ways that it might be increased. It thereby becomes a marketing tool, and in some countries authors receive rewards directly proportional to the IF of the journal that accepts their article.
A metric based on article downloads may help level the playing field. Of course, there will have to be appropriate controls so that no one can game the system (i.e., open access articles will have an advantage if the only metric is downloads).
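As a sketch of one such control, downloads could be normalized against a typical figure for the article’s access type; everything here, including the baseline numbers, is invented for illustration:

```python
# Hypothetical sketch: normalize downloads by access type so that open
# access articles, which face no paywall, don't automatically win.
# The baseline figures below are invented for illustration only.
TYPICAL_DOWNLOADS = {
    "open_access": 1000,   # assumed median downloads for an OA article
    "subscription": 400,   # assumed median for a paywalled article
}

def normalized_download_score(downloads: int, access_type: str) -> float:
    """Downloads relative to a typical article of the same access type."""
    return downloads / TYPICAL_DOWNLOADS[access_type]

print(normalized_download_score(1500, "open_access"))  # 1.5x typical
print(normalized_download_score(600, "subscription"))  # 1.5x typical
```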
Perhaps the industry through one of its trade associations should formulate its own measure, using as a basis the good and transparent work done by COUNTER.
Bruce Gossett
(American Society of Civil Engineers)
You point out some of the ways various people exploit the Impact Factor (or are exploited by it). I think a more sensible way to establish primacy among journals would be to run an annual, unprompted survey of scientists and ask them to write in their Top 5 or Top 10 journals, based on importance and prestige.
It would be a lot less expensive if done online, and probably just as valid, if not more so.
Bruce: this is something that the UKSG are investigating now – details at: http://www.uksg.org/usagefactors
If blogging, twittering and Google ranking are used as metrics to determine the value of a research paper/journal, how long will it take for a concerted effort by a creationist group to push pseudoscience outlets to the top of the rankings?
Wouldn’t things like linkfarming make it far easier to game than what’s happening now?
“a bunch of early graduate students who have the time to chat online”
Oh, you mean like Terry Tao, Jonathan Eisen, Carl Wieman, Russ Altman…?
Blogs, tweets, etc. could be an important part of a new metric: the VF, the Vanity Factor!
Richard.
All well and good to consider citations in “new” media, but impact factors are already severely limited for humanities and social science research because they don’t take into account citations in scholarly books.
We now have the technology to change that, and I’d like to think that would be higher up the priority list.
I would add to Jo’s comment that the Impact Factor only takes into consideration the journals indexed by Thomson, which is a tiny subset of the scholarly literature. Does this mean that if you are not indexed in the Web of Knowledge, you don’t exist?
Going on a slight tangent, I’d like the IF to be broadened to include the use of data sets, via citations of something like a DOI attached when the data are shared (by a vestigial journal that only asks for a form with community-specified minimum info, and ultimately direct from databases). I’d also like a universal person ID (for citation purposes) at which to bank all such citations (and, in passing, to do away with this issue: http://tinyurl.com/aqtpbn), and maybe even to bank other kinds of credit (time spent reviewing? training?).
On that front, http://thedata.org/ is interesting, and there are lots of discussions and other things appearing, but it has to be more or less universal (DOIs as a component would help, maybe with OpenIDs, in some kind of registry).
If one needs all this data, like blog coverage and downloads, to see the value of an article, does that not mean that I need to wait some months after the article comes out to see if it is a good one? If a paper appears in Nature, etc., I would assume it is worth reading right away. The same problem appears if I’m a young scientist applying for my first grant and all of my papers have just been published recently. How could I convince people that the research I did has a great impact if this will only be seen some months/years later?
Blog coverage and downloads occur much more quickly than citations. The issue you’re getting at is brand power. We assume that the most important papers find their way into the most important journals. There is a strong correlation, but cause-effect is a bit harder to ferret out. Also, the utility of a paper may be hidden by this. Many clinical/practitioner/bench journals publish useful studies that aren’t cited because they’re used instead. With the current impact factor system, citations take months or years to accumulate. We live in a faster, more fluid information environment than the one Eugene Garfield predicated his system upon.
Torsten – you are thinking about the journal in a somewhat traditional manner, though, I think.
Why should the quality of an article be a binary thing (i.e., “it is in Nature, therefore it is good” vs. “it is not in Nature, therefore it is bad”)? An alternative could be to publish everything that passes peer review, but then have the peer reviewers and/or editors assign a ‘grade’ to each paper. In this way, the grade could be used as a “day one” indicator of the possible value of the article, as you requested (and it would have been measured by the same system that is currently used to make accept/reject decisions at a selective journal), and this group of “highly graded” papers could then become a high-quality brand of their own.