
Sitting through multiple presentations at last week’s STM Innovations Seminar in London, which focused on metrics and altmetrics, I heard the word “impact” invoked in many contexts: the impact factor, usage factors, social media awareness, downloads, and various more and less acceptable attempts at meaningful correlations.

By the end of the day, I started to notice how drained the term “impact” had become. It has a specific meaning in scholarly publishing, but one that is being blurred. “Impact” is such an attention-grabbing word that you get more impact just by using it, even when it’s a stretch. Of course, the wear and tear on “impact” was followed by similar trafficking in the term “factor,” which was trotted out nearly as frequently.

But is the habit of using “impact” or “factor” a sign that altmetrics is a bit too beholden to recreating the impact factor?

Originally, as one attendee noted, Eugene Garfield developed the impact factor to cut through the clutter of less objective and transparent attempts to assert authority and prestige. “Impact” had a specific meaning — impact on subsequent thinking of working scientists, realized through active citation in the literature.
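As a reminder of just how narrow that meaning is, the standard two-year impact factor reduces to a single citation ratio (the familiar Garfield/ISI definition, restated here for context, not drawn from the seminar):

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items published in } Y{-}1 \text{ and } Y{-}2}{\text{citable items published in } Y{-}1 \text{ and } Y{-}2}
\]

Everything in that ratio is grounded in active citation by working scientists, which is precisely what gave “impact” its original, specific meaning.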

This narrow definition is worth preserving. We need a word that points to an endpoint that is demonstrable and intellectual, not speculative and work-based.

Choosing the right term is important to getting to the right idea. If we stay stuck in the land of “impact,” we remain attached to established ideas, even subconsciously, which inhibits novel ideas.

For instance, awareness via social media does not strike me as impact. It is awareness. Awareness alone is not going to generate intellectual outputs, or at least it should not; it’s too shallow. That’s not to say it’s unimportant, but we should call it what it is. Awareness is correlated with usage, which is not surprising, but worth noting. When is the “awareness” idea exhausted? Pretty quickly, I fear.

Usage is not impact. Current usage measures have many limitations, including no integration of print usage, no integration of pass-along usage, and no information about the strength of engagement (e.g., glance, read, distribute, save). Usage is important to note, but our current usage measures are about counting transactions, and they do not reflect any measure of intellectual output or long-term votes of value through actions like citation.
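To make that limitation concrete, here is a minimal sketch, with invented event names and placeholder weights, contrasting the transaction count we actually get today with the engagement-weighted measure we lack:

```python
# Toy sketch: transaction counting vs. engagement strength.
# Event names and weights are invented for illustration; real usage
# reports record little more than the bare transaction count.

# Hypothetical event log for one article.
events = ["glance", "glance", "read", "read", "save", "distribute", "glance"]

# 1. What current measures give us: every transaction counts the same.
transaction_count = len(events)  # 7 "uses," engagement strength unknown

# 2. What we would need: events weighted by strength of engagement.
#    Placeholder weights, not a proposed standard.
weights = {"glance": 0.1, "read": 1.0, "save": 1.5, "distribute": 2.0}
engagement_score = sum(weights[e] for e in events)

print(f"Transactions: {transaction_count}")                  # 7
print(f"Engagement-weighted score: {engagement_score:.1f}")  # 5.8
```

Two articles with identical transaction counts can represent very different depths of engagement, and our current measures cannot tell them apart.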

The impact factor has problems, but it has endured, and continues to provide some good information in an interesting and useful context. I hope new metrics emerge that help shed some light on the value of information’s journey through science. But if we continue to refer to every potential new measure as having “impact,” we’re showing an unnecessary fealty to the impact factor, an uncertainty about the independent value of our new ideas, and a lack of precision about exactly what new metrics are measuring.

Conflating new ideas with established ideas isn’t helping us think clearly or find ways to leapfrog where we are. Mental habits are a major factor in creativity — you have to break habits to think differently. And it’s hard to have an impact when everything is “impact.”

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

11 Thoughts on “Stretching ‘Impact’ By Many ‘Factors’ — Signs of Thrall and Uncertainty?”

The word “impact” can be misleading, however, if you take it to be an honorific term connoting only positive influence, as I think many people are inclined to do. But citations can proliferate just as readily around poor work that is being exposed as such by subsequent critics. I think, for instance, of the reception of The Bell Curve or the reaction to claims about cold fusion, both of which produced many citations that were hardly a measure of the quality of the work criticized.

In my experience, citations are almost always made because the paper being cited is a worthwhile contribution. Authors rarely go out of their way to cite a bad paper in a negative context, and would probably look mean and petty if they did. There certainly are a few papers that get a lot of citations because they’re bad, but such citations are surely a tiny minority.

My research on the logic of citations supports your observation, Tim, that bad work is not often cited, although there is a subset of citations to good prior work that did not solve the problem being addressed in the citing article. In a sense, such citations are negative. Some of the altmetrics, however, are very likely to pick up what we might call negative awareness, including scandals and exposures of fraud. This is a serious problem for them, as they do not distinguish news and rumor from the communication of new knowledge.

But then understanding negative awareness is also important. Consider, for example, the case where retracted articles continue to be cited. Scientific communication is a complex ecological system that needs to be measured in many ways. As Sandy points out, the bad has impact too.

I don’t think there’s any sense in which ‘altmetrics’ are trying to recreate either the function or the usage of the Journal Impact Factor, although it’s fair to say that a certain amount of work is being done in that area; at least two of the speakers at the STM Innovations Seminar addressed it. However, as I said in my presentation, it’s far more complicated than that. Awareness, communication, consumption, recommendation, and citation certainly have relationships, and these relationships will differ depending on discipline, time, and whether the consumer has a scholarly or lay interest. I would say (and I suspect most of my colleagues would agree) that ‘impact’ is one dimension of the different kinds of behaviour I describe.

I thought your presentation was very good, and pushed the thinking, which is what I think we need. My concern is that continuing to stretch old terms across new ideas doesn’t help. Let’s be clear and find new things that matter.

Social metrics such as those being discussed as ‘altmetrics’ should be a welcome addition to what appears to be an emerging suite of digital metrics for an article. We all realize that social media is a form of learning and additional exposure to content and ideas, so these metrics should be a great addition to our understanding of how our content is consumed. These new altmetrics are still being gathered and analyzed, and from what I can tell, even those advocating for them aren’t sure what to make of the measurements yet. I agree with Kent that the narrow (and current) definition of impact is worth preserving. I’m intrigued by what new measurements altmetrics might yield, but for the moment I see them as a complement to the traditional impact factor.

I am inclined to keep the word impact, as in the impact on subsequent thinking of working scientists, and not surrender it to the impact factor. Impact is a perfectly valid concept. The real problem is that we do not understand impact, or rather how it occurs, so we cannot measure it very well. The various measures being explored are arguably useful, but none is even close to the real thing, and we can all see that. Creating new words is not going to change this problem because impact is actually what we are looking for.

The challenge is to first understand the phenomenon of impact, then we can properly measure it. This is a central problem in the science of science. (Disclaimer: it is also one of my research areas. See http://www.osti.gov/innovation/research/diffusion/index.)

I’m not arguing for creating new words, just being comfortable with real words that make more sense. A social metric is about sharing or distribution or reach or handoffs, not about impact. A reading metric is about reading or views or downloads but not about impact. Neither leaves a record of having had an impact. They don’t fulfill the “Koch’s postulate” of impact (entity, acquisition, perpetuation). They may be more important than the impact factor in some fields, especially very clinical fields or those dominated by practitioners, where you wouldn’t expect a lot of citations to come from. Being clear about what we’re measuring, why, and for what end will only help us add nuance to richer data. As someone who has seen concepts trapped in language before, only to be liberated by the right words, I’d recommend clarity over cliché here.

Indeed, if we had a good model of the complex interactions that make up scientific communication we would see that we are measuring a lot of different aspects of the system, each of which is important but none of which is the total impact of an idea, article or person. We will probably need new words to describe these aspects. The concept of total impact per se might turn out to be too vague to be useful.

An analog might be the concept of quantity of motion that was widely discussed and debated in the Middle Ages. Those discussions led to the key distinctions between speed, velocity, momentum, kinetic energy, inertia, etc.; in short, modern dynamics. The science of the dynamics of ideas still awaits us, but we are getting very close to it. This is where all these new metrics really come in, along with network research and a bunch of other stuff. We will understand where ideas go and what they do when they get there.

A critical, time-tested metric for subscription journals is revenue. What people will pay for your product is a measure of its value. One of the flaws of communism is removing the price signal from investment decisions, leading to inefficient use of capital. I fear we are heading down much the same road with open access: if a journal is free to users, we have lost a valuable marker of its worth.
