We’ve come to accept citations as a measure of success in science, although not without caveats. The Institute for Scientific Information (ISI) regularly reminds us that thou shalt not compare citation performance across subject disciplines. It is written so often that it ought to be a commandment, listed right after murder, or perhaps before coveting. As a result, it’s difficult for an outsider to gauge whether a paper with 20 citations in Aerospace Engineering is as successful as a paper in Developmental Biology with 100 citations.

Three Italian researchers (Filippo Radicchi, Santo Fortunato, and Claudio Castellano) now claim that differences between disciplines can be quickly remedied by a simple, intuitive calculation: divide the number of citations to a paper by the average number of citations to all papers in its discipline for that year. The effect is stunning, and it appears to hold irrespective of the publication year studied. Watching the citation curves collapse under this normalization is like watching ducks line up in a row.
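
To make the calculation concrete, here is a minimal sketch in Python. The citation counts and field names are made up for illustration; they are not taken from the paper.

```python
from statistics import mean

# Hypothetical raw citation counts for papers published in one year,
# grouped by discipline (illustrative numbers only).
citations = {
    "Aerospace Engineering": [3, 5, 8, 12, 20, 40],
    "Developmental Biology": [15, 25, 40, 60, 100, 200],
}

# The relative indicator: each paper's citation count divided by the
# mean citation count of its discipline for that publication year.
normalized = {
    field: [c / mean(counts) for c in counts]
    for field, counts in citations.items()
}

for field, values in normalized.items():
    print(field, [round(v, 2) for v in values])
```

After rescaling, a paper cited at twice its field’s average scores 2.0 whether it sits in Aerospace Engineering or Developmental Biology, which is what makes cross-field comparison possible.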

Normalized citation curves for various disciplines (reproduced with the authors' permission)

Their manuscript, “Universality of citation distributions: towards an objective measure of scientific impact” was published online October 31st in the Proceedings of the National Academy of Sciences.  A final manuscript is available from the arXiv.

The authors provide no theoretical explanation or mathematical proof for why their technique works.  It just does.  I can hear the sound of thousands of pencils scratching scalps with exclamations of, “Why didn’t I think of that?!”

As an aside, the authors note that a similar universality is found in elections when one compares the proportions of votes received by candidates. In other words, there may be something in the citation process quite analogous to voting. I’m hesitant to extend the analogy to article downloads, however: in a fair election, you can only vote for a candidate once.

One implication of these findings is that a generalized form of the Hirsch index (h-index), a measure for evaluating the success of an individual author, could be applied across fields. This may be welcome news for tenure and review committees, which are often composed of individuals from many departments of a university who don’t intuitively understand the nature of each other’s literature. It may also make citation counting all too tempting when evaluating academics. If normalized citation counts can be validated, “normalized Impact Factors” may be just around the corner.
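
As a sketch of how such a cross-field index might be computed, one could run the standard h-index procedure over field-normalized rather than raw citation counts. This is an illustration under that assumption; the authors’ precise generalization may differ.

```python
def generalized_h_index(normalized_counts):
    """Largest h such that at least h papers each have a field-normalized
    citation count of at least h (a sketch; the authors' precise
    generalization may differ)."""
    ranked = sorted(normalized_counts, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author whose papers are cited at 0.5x to 4x
# their respective field averages:
print(generalized_h_index([4.0, 3.1, 2.2, 1.5, 0.5]))  # prints 2
```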

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

6 Thoughts on "Universal Citations"

Is this type of normalization not very similar to that of the CWTS ‘crown indicator’? While I don’t want to discredit the authors’ work, I would venture that the idea in itself is not so new.

L.L. on SIGMETRICS says the “normalization is the same as the Mean Expected Citation Rate, which the Hungarian group has been using for two decades or so.”
Regarding voting: I saw an article in one of the ACM or IEEE magazines where that was suggested… I can’t remember where now.

Following up on my previous comment: there are several ‘normalized’ citation counts that have been proposed and used over the years. As pointed out by Christina, Schubert, Glänzel and Braun normalized citation counts in a similar fashion as early as 1983. The crown indicator is just a further refinement of this idea: divide the average citation count for a group of publications by the average for all publications of the same type, in the same field, in the same year.

The main difference is that these older indicators typically use the mean citation rate of a group of articles as the numerator, whereas Radicchi et al. apparently use the number of citations for an individual paper. This has recently (and, as far as I can tell, independently) been proposed by Jonas Lundberg. Still, the PNAS article certainly has some interesting contributions, so I don’t want to imply that it has no merit.
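
To make the distinction concrete, here is a small sketch with hypothetical numbers; both formulas are paraphrased from this discussion, not taken verbatim from the papers cited.

```python
from statistics import mean

# Hypothetical data: an author's papers, each with its raw citation
# count and the world average for its field, year, and document type.
papers = [
    {"citations": 30, "field_avg": 10.0},
    {"citations": 4,  "field_avg": 8.0},
    {"citations": 12, "field_avg": 6.0},
]

# Crown-indicator style: a ratio of means over the whole group.
crown = (mean(p["citations"] for p in papers)
         / mean(p["field_avg"] for p in papers))

# Radicchi/Lundberg style: normalize each paper first, then average.
per_paper = mean(p["citations"] / p["field_avg"] for p in papers)

print(round(crown, 2), round(per_paper, 2))  # ~1.92 vs ~1.83 here
```

The two approaches can yield noticeably different scores for the same set of papers, since highly cited papers in low-citation fields weigh differently in each.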
