We’ve come to accept citations as a measure of success in science, although not without caveats. The Institute for Scientific Information (ISI) regularly reminds us that thou shalt not compare citation performance across subject disciplines. It is written so often, it should be a commandment listed right after murder, or perhaps before coveting. As a result, it’s difficult for an outsider to gauge whether a paper with 20 citations in Aerospace Engineering is as successful as a paper in Developmental Biology with 100 citations.
Three Italian researchers (Filippo Radicchi, Santo Fortunato, and Claudio Castellano) now claim that differences between disciplines can be quickly remedied by a simple, intuitive calculation: divide the number of citations to a paper by the average number of citations to all papers in its discipline for that year. The effect is stunning, and seems to hold irrespective of the publication year studied. The effect of this normalization is like watching ducks line up in a row.
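The calculation can be sketched in a few lines of Python. The field names and citation counts below are made up for illustration; a real analysis would group papers by both field and publication year, as the authors do.

```python
from collections import defaultdict

def normalize_citations(papers):
    """Divide each paper's citation count by the average count
    for all papers in its field (for a single publication year)."""
    totals = defaultdict(int)
    counts = defaultdict(int)
    for p in papers:
        totals[p["field"]] += p["citations"]
        counts[p["field"]] += 1
    field_mean = {f: totals[f] / counts[f] for f in totals}
    return [
        {**p, "relative": p["citations"] / field_mean[p["field"]]}
        for p in papers
    ]

# Hypothetical example: 20 citations in Aerospace Engineering can be
# just as "successful" as 100 in Developmental Biology once each is
# divided by its own field's average.
papers = [
    {"field": "Aerospace Engineering", "citations": 20},
    {"field": "Aerospace Engineering", "citations": 10},
    {"field": "Developmental Biology", "citations": 100},
    {"field": "Developmental Biology", "citations": 50},
]
result = normalize_citations(papers)
```

With these invented numbers, the aerospace paper with 20 citations and the biology paper with 100 citations end up with the same relative score, since each sits the same distance above its field's average.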
Their manuscript, “Universality of citation distributions: towards an objective measure of scientific impact,” was published online October 31st. A final manuscript is also available.
The authors provide no theoretical explanation or mathematical proof for why their technique works. It just does. I can hear the sound of thousands of pencils scratching scalps with exclamations of, “Why didn’t I think of that?!”
As an aside, the authors note that a similar universality is found in elections when one compares the proportions of votes received by candidates. In other words, there may be something in the citation process quite analogous to the voting process. I’m hesitant to extend the generalization to claim that article downloads are also similar to votes. In fair elections, you can only vote for a candidate once.
The implication of these findings is that a general form of the Hirsch Index, a measure for evaluating the success of an individual author, can also be applied across fields. This may be welcome news for tenure and review committees, which are often composed of individuals from many departments at a university who don’t intuitively understand the nature of one another’s literatures. It may also make citation counting all too tempting when evaluating academics. If normalized citation counts can be validated, “normalized Impact Factors” may be just around the corner.
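One simple way such a field-adjusted Hirsch Index could work is to rescale an author's citation counts by their field's average before computing the usual h-index. This is only a sketch of the idea, not the authors' exact formulation; the `reference_mean` used to restore a common scale is an assumption for illustration.

```python
def h_index(citations):
    """Standard Hirsch Index: the largest h such that the author
    has h papers with at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

def field_adjusted_h_index(citations, field_mean, reference_mean):
    """Rescale each paper's count by the ratio of a common reference
    average to the author's field average, then take the h-index.
    A hypothetical variant, not the paper's exact definition."""
    rescaled = [c / field_mean * reference_mean for c in citations]
    return h_index(rescaled)

h_plain = h_index([10, 8, 5, 4, 3])
# An author in a low-citation field (field average 5) measured
# against a cross-field reference average of 10:
h_adjusted = field_adjusted_h_index([20, 10, 6, 2], 5, 10)
```

The rescaling step lifts counts from sparsely cited fields and shrinks those from heavily cited ones, so a committee comparing an engineer to a biologist would, in principle, be comparing like with like.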