"My Hero, Zero" (Schoolhouse Rock, 1973)

What constitutes a high number of article citations? It often depends on the field.

One hundred citations to a paper in cancer biology — a field dense with researchers, large grants, short papers, and fast publication times — may not represent the same impact as 100 citations to an economics paper.

For this reason, comparing citation impact across disciplines is widely considered verboten.

In 2008, three Italian researchers tried to change that. In a paper published in PNAS, Filippo Radicchi, Santo Fortunato, and Claudio Castellano argued that it was possible to normalize citations across disparate disciplines so that the performance of papers published in different fields could be compared. Their technique was very simple — divide the number of citations to a paper by the average number of citations to all papers in its discipline for that year. As I reported in 2008, this simple transformation appeared to line citation distributions up like ducks in a row. With a universal citation metric, it would be far easier to evaluate the relative impact a paper, and its authors, had on science.
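The transformation amounts to computing a relative citation score, c_f = c / c_0: a paper's citation count divided by the mean citation count for its field and year. A minimal sketch of the idea, using invented citation counts (the function name and data are mine, not from the paper):

```python
# Radicchi-style normalization: divide each paper's citation count by
# the mean citation count of its field for that publication year.
# The counts below are invented for illustration.

def relative_citations(counts):
    """Return c_f = c / c0, where c0 is the field's mean citation count."""
    c0 = sum(counts) / len(counts)
    return [c / c0 for c in counts]

fields = {
    "cancer biology": [120, 45, 60, 10, 5],
    "economics": [30, 12, 8, 2, 1],
}

for field, counts in fields.items():
    rescaled = [round(cf, 2) for cf in relative_citations(counts)]
    print(field, rescaled)
```

In this toy example, the top-cited paper in each field lands at a comparable rescaled value (roughly 2.5 to 2.8 times its field mean) even though the raw counts differ fourfold — the kind of alignment Radicchi reported across fields.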

Such a discovery was big, bold, and beautiful. But like other novel and important claims made in science, it didn’t take long for other authors to attempt to validate the claim.

In a paper published in the January issue of the Journal of the American Society for Information Science and Technology, three researchers at the University of Leiden (Ludo Waltman, Nees Jan van Eck, and Anthony van Raan) dispute the universality of citation distributions.

In their paper, titled "Universality of citation distributions revisited," Waltman and colleagues expanded the analysis from the 20 disciplines reported in the original Radicchi paper to all 221 disciplines in the sciences and social sciences, as classified by Thomson Reuters' Web of Science. After counting the citations to each paper over ten years, Waltman applied Radicchi's normalization technique. Did the citation distributions for every field indeed line up? Not exactly. Waltman writes:

[M]any fields of science indeed seem to have fairly similar citation distributions. However, there are quite some exceptions as well. Especially fields with a relatively low average number of citations per publication, as can be found in the engineering sciences, the materials sciences, and the social sciences, seem to have nonuniversal citation distributions.

Apart from the limited sample in the original study, Waltman describes two oddities with the Radicchi paper that may explain their divergent results:

First, Radicchi included both research articles and letters in his analysis. Waltman finds this mix curious, since research articles have a very different citation profile from letters.

Second, and more importantly, Radicchi excluded all uncited papers from his analysis. Since many papers remain uncited over the years, Waltman questions why these valid observations were left out. The Radicchi paper does not explain the exclusion, stating only that including uncited papers does not change the results — a claim Waltman disputes by showing that excluding them reshapes the citation distribution in a way that makes the universality claim appear "more justifiable" than it is.

Zero is a very important number in scholarly communication. For many indicators of scholarly impact (citations, comments, blogs, and tweets), zero is the most frequent number encountered. It forms the anchor of a long tailed, skewed distribution in which most attention is lavished on a coveted few. Remove zero, and you’ve changed the very nature of that distribution.
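The arithmetic makes the point concrete: because the rescaling divides by the field mean, dropping uncited papers raises that mean and shifts every rescaled value. A small sketch with invented counts:

```python
# Invented, zero-heavy citation counts for one field: most papers are
# uncited, and a coveted few collect most of the attention.
citations = [0, 0, 0, 0, 0, 0, 1, 1, 2, 3, 5, 12, 40]

mean_all = sum(citations) / len(citations)    # zeros included
cited = [c for c in citations if c > 0]
mean_cited = sum(cited) / len(cited)          # zeros excluded

print(f"mean with zeros:    {mean_all:.2f}")
print(f"mean without zeros: {mean_cited:.2f}")
print(f"top paper, rescaled with zeros:    {40 / mean_all:.2f}")
print(f"top paper, rescaled without zeros: {40 / mean_cited:.2f}")
```

In this made-up field, excluding the zeros nearly doubles the mean (from about 4.9 to 9.1) and so roughly halves every rescaled citation score — the normalization produces a different answer depending on whether uncited papers count as observations.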

To quote Schoolhouse Rock: "Zero is a wonderful thing. In fact, Zero is my hero."

Phil Davis

Phil Davis is an independent researcher and publishing consultant specializing in the statistical analysis of citation, readership, publication, and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006), and was trained as a life scientist. His research has focused on the dissemination of scientific information, rewards and incentives in academic publishing, and economic issues related to libraries, authors, and publishers.

Discussion

4 Thoughts on "Universal Citation Paper Lacks Universality"

As an Editor-in-Chief, I always look at which articles have received zero cites. Basically, they represent a failure to attract interest, a lesson which can be applied to future selections. (It’s not a strict criterion, of course; sometimes you have to take a chance.) Not considering the zero cites seems like a poor way to compare journals.

Let me make a small addition to Phil’s story. Recently, a new paper by Radicchi and Castellano was published in PLoS ONE (http://dx.doi.org/10.1371/journal.pone.0033833). This paper confirms that universality of citation distributions indeed does not hold generally. The authors propose an interesting alternative approach for making proper comparisons of citations in different fields of science.

Ludo Waltman

I find that the authors are playing with the word "universality." In their latest PLoS ONE paper, they use phrases like "universal properties," which amounts to admitting that their transformation does not hold for all fields, that variation exists among fields, and that there are exceptions.

The transformation is almost linear for the majority of the subject-categories. Exceptions to this rule are present, but, in general, we find that all citation distributions are part of the same family of univariate distributions. In particular, the rescaling considered in [our PNAS paper], despite not strictly correct, is a very good approximation of the transformation able to make citation counts not depending on the scientific domain.

While near-universal findings are commonly reported in much of the scientific literature, we are dealing with the construction of a tool to evaluate the impact of science and the career trajectory of scientists. For this reason, creating a simple metric that is a “good approximation” may not be good enough.
