Thomson Reuters is launching a new service called InCites, an analytical suite designed to let users answer questions like:

  • How many papers did my institution/country produce?
  • Which papers are most influential in which field?
  • Which authors are rising stars?
  • How can I find an individual researcher’s h-index? Or a department’s?
  • Is my institution’s research focus changing?
  • How does my institution compare to peer institutions — or aspirational peers?
  • What are the strongest fields at my institution? Which ones need improvement?
  • What is the average citation rate at my institution? Or in selected fields?
  • Who is collaborating with whom? And how often?

It sounds kind of interesting at first glance, and I’m sure the charts and graphs will be seductively beautiful, the data hard to resist.

But I find myself rebelling more and more at the seemingly endless quantification of scientific publishing. The slicing and dicing goes on without end! And I don’t want to pick on InCites in particular; it’s just the example du jour.

In the wake of the financial crisis, in which we followed measures like the Dow Jones Industrial Average and our 401(k)s right off the cliff, you’d think we’d be more skeptical about numbers and their use, misuse, and intrinsic value. The Dow is a great example. We seem to worship it, but it’s a short-term measure that masks complexity and isn’t outcome-oriented. It tracks just 30 companies out of the thousands listed on U.S. exchanges, it changes daily, and it measures the process of trading, not the outcome of wealth.

I started worrying about this a year or so ago and posted on the topic (the h-index, etc.). To reiterate some points from that post, citations occur for many reasons. Some citations are perfunctory: I’m citing this to baseline my paper or opinion, to buy credibility, as the price of entry. Some citations are rhetorical: I’m citing this because it makes a point I only want to make tangentially or dismiss indirectly. Some are negative: I’m citing this because it’s wrong.

Counting doesn’t differentiate. It just quantifies. And I wonder about the ability of InCites, or any other system, to actually predict rising-star authors, changes in research direction, or areas needing improvement without creating a self-fulfilling prophecy. That is, authors counted as rising stars will be treated as rising stars. Ipso facto, they are rising stars. Not counted as such, they might have stumbled on their next research assignment and flamed out; but with the backing of InCites or some other number, that stumble will be harder to observe, its effects dampened, a quantified reputation breaking the fall.
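
To make that concrete, here’s a minimal sketch (in Python, with invented citation records) of how a standard h-index calculation works. The “reason” labels attached to each citation are hypothetical annotations of my own; the point is that the counting never looks at them, so a perfunctory, rhetorical, or negative citation weighs exactly as much as a substantive one.

    # Minimal sketch: computing an h-index from per-paper citation records.
    # The citation "reasons" are hypothetical labels for illustration only;
    # a real counting service never sees them -- which is the point.
    papers = [
        ["perfunctory", "perfunctory", "rhetorical", "negative"],  # 4 citations
        ["perfunctory", "negative", "rhetorical"],                 # 3 citations
        ["perfunctory"],                                           # 1 citation
        [],                                                        # 0 citations
    ]

    def h_index(citation_lists):
        """Largest h such that h papers each have at least h citations."""
        counts = sorted((len(c) for c in citation_lists), reverse=True)
        h = 0
        for rank, count in enumerate(counts, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    print(h_index(papers))  # -> 2; why each citation was made never enters the math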

Citations aren’t data. Citations are academic rhetoric, social capital, and intellectual favoritism. Counting them as equivalent items in a unified dataset, and then presenting the results in a framework that invites short-term, simplified, process-oriented analysis, gives me pause.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

2 Thoughts on "InCites — More Counting, But Does It Count?"

This gets to the heart of the matter: a good methodology applied to the wrong domain. Citation-counting is pseudo-science. And it’s a broad social problem, when you consider such things as the SATs, which attempt to put a number on something as complex as human intellectual capability.
