Recently, I was surprised to find myself in the midst of some academic administrators who were comparing notes about how they use and rely upon the “h-index.” The goal of the h-index is to quantify a researcher’s apparent contribution to the literature, rewarding diverse contributions more than a single blockbuster contribution.
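
For readers unfamiliar with the measure: a researcher’s h-index is the largest number h such that h of their papers have each been cited at least h times. Here is a minimal sketch of the computation in Python (the function name and sample citation counts are mine, purely for illustration):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

One blockbuster paper with thousands of citations still counts as only one paper, which is why the measure rewards a body of well-cited work over a single hit.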

I was under the impression that the h-index was an obscure measure that hadn’t hit the mainstream yet. I was wrong. It’s not incredibly prevalent, but it is being used, and if you’re an academic, your tenure review may well take your h-index into account.

But there is a vital question about the h-index:

How fair is it to have your academic contributions boiled down to a single number?

A recent paper from the Joint Committee on Quantitative Assessment of Research discusses the potential pitfalls of over-reliance on quantitative measures like impact factors and the h-index.

I found the paper refreshingly frank. It’s from mathematicians, and they caution:

. . . it is sometimes fashionable to assert a mystical belief that numerical measurements are superior to other forms of understanding.

Amen. Judgment and wisdom are undervalued in many places these days, and this “mystical belief” in quantitative measures often subverts both.

Interestingly, the h-index has offshoots: the “m-index” (in which the h-index is divided by the number of years since the academic’s first paper, to make it easier for young faculty to compete) and the “g-index” (which gives more weight to an academic’s most highly cited papers).
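
As a rough sketch only, and building on the h_index function above (the function names are mine), the two variants look like this:

```python
def m_index(citation_counts, years_since_first_paper):
    """The h-index divided by career length in years, which levels the field for younger researchers."""
    return h_index(citation_counts) / years_since_first_paper

def g_index(citation_counts):
    """Largest g such that the g most-cited papers have at least g*g citations in total."""
    counts = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# With the same five papers as above: h = 4, but g = 5, because the
# heavily cited papers at the top pull the cumulative total up.
print(g_index([10, 8, 5, 4, 3]))  # 5
```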

The validity of measures like the impact factor, the Eigenfactor, the h-index, and others has been assessed using “convergent validity,” or how well these measures correlate with one another. As the authors put it:

This correlation is unremarkable, since all these variables are functions of the same basic phenomenon — publications.

Circular logic, indeed.

Papers are cited for all kinds of reasons, most of them rhetorical. In fact, even non-rhetorical citations have been found to stem from a number of motivations beyond positive intellectual debt. Yet most people who rely on simple measures like the impact factor and the h-index are unaware of these pitfalls and nuances.

These measures are also all backward-facing, and the past is often a poor predictor of the future.

However, if we were to accept that these measures are inherently sloppy and subjective even when they look clean and objective, that they reflect reputation and the “halo effect” as much as academic contribution, and that they must be viewed as proxies for past performance, they might just work.

As a famous scientist and mathematician once said, “Everything should be made as simple as possible, but not simpler.”

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion


Henry Small also promoted the idea that citations function as *concept symbols*, essentially a shorthand for an idea expressed by another author.

For instance, if I cite Watson and Crick (Nature, 1953), most scientists will know that I’m referring to the concept of the double-helix of DNA, and I don’t need to go any further to describe the work.

By analyzing the words around a citation, it is possible to create a collective interpretation of what that document stands for. The act of authorship and citation-making can therefore be viewed as a communal dialog.
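
As a crude sketch of what such an analysis might look like (the function name, citation marker, and window size are my own inventions):

```python
import re
from collections import Counter

def citation_context_terms(citing_sentences, marker, window=5):
    """Pool the words found within `window` tokens of a citation marker
    across many citing sentences, as a rough 'concept symbol' profile."""
    profile = Counter()
    for sentence in citing_sentences:
        if marker not in sentence:
            continue
        before, _, after = sentence.partition(marker)
        nearby = re.findall(r"[A-Za-z-]+", before)[-window:] + \
                 re.findall(r"[A-Za-z-]+", after)[:window]
        profile.update(word.lower() for word in nearby)
    return profile

# Pooled over many papers citing Watson and Crick (1953), terms like
# "double", "helix", and "DNA" would dominate the profile.
```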

Small, H. 1978. Cited Documents as Concept Symbols. Social Studies of Science 8: 327-340.
DOI: 10.1177/030631277800800305
