Is there a name for that sensation that “it seemed like yesterday … and yet, also, forever ago”? That’s how I felt on learning that it’s already / only five years since the “altmetrics manifesto” was published. At last week’s 2:AM conference in Amsterdam, the authors of that manifesto were brought together in person for the first time. Yes, Jason Priem, Dario Taraborelli, Paul Groth and Cameron Neylon – who together articulated such an influential vision for a new era in research evaluation and filtering – put all those thoughts together by email. (What a triumph for our digital age.) Some of those emails were shared during a celebratory session at 2:AM (video), and thus we learn that the team’s aspiration for launching the manifesto was a post right here in the Scholarly Kitchen, and that there were enjoyably heated discussions about whether “altmetrics” should be hyphenated.

More importantly, the four shared thoughts on how far we have come in five years (Cameron: “People are embarrassed if it’s revealed that they only look at Impact Factor data for research evaluation. All the major publishers are using altmetrics in some shape or form.”) and what they would change if they rewrote the manifesto today (Not much. Jason: “This has always been a movement to fundamentally change how research works, with more scalable filters and recommendations than peer review.” Paul: “It was barely about evaluation and I would even take out what there was – and focus more heavily on improving the research process.”). So, then, where should altmetrics go next?

  • Better shared infrastructure to allow more open services to exist (Dario)
  • Figuring out what (collections of) social actions actually mean (Cameron: “We tell ourselves stories but we’ve never really had any evidence for them.”)
  • More collaboration in pursuit of real interventions and changing practices (Paul: “Science is a team sport but too often we treat it like it’s boxing.”)
  • Moving the focus from “different metrics on established products” (articles) to metrics on new products, such as software (Jason)
  • De-westernizing our recognition and interpretation of different signals and indicators (Cameron)

Naturally, this closing session reflected much of the discussion that had preceded it during two days of intense conferencing. Among this group of advocates and experts, we’re over the honeymoon phase with altmetrics, and looking for transparency, standards and comparability. We want altmetrics with everything – non-publication content; educational rather than research content – and to extend their application to bigger questions, such as evaluating gender bias in grant selection, or the extent to which supervisors influence career success. But there was acknowledgement that broader awareness or understanding of altmetrics is still low, with too much attention focused on whether or not altmetrics correlate to citations (Juan Pablo Alperin of Simon Fraser University / the Winnower: “Of course altmetrics don’t correlate to citations. That’s why they’re alt.”).

There was back and forth on that point, but the view tended towards altmetrics being more complicated than citations, reflecting as they do such a range of different actions, with so many different – and little understood – motives (see bullet 2, above). It was argued that we equally struggle to understand the intent behind citations, and that – as with many innovations – there are double standards, with the newcomer expected to prove itself far beyond what we expect of the entrenched norm. There is a pervading concern about academic snobbery – whether those who impugn altmetrics do so because of a fear of the unknown audiences and intentions that they represent – and, ultimately, a strong desire to see more research into precisely who is “creating” altmetrics (Alperin reports on his grassroots efforts to explore this at about 1 hr and 6 mins of this video). That brings us back to the original manifesto, which its authors concluded by describing it as “a call for more research”. Great strides have been made on that front in the last five years. Here’s to the next five.

PS If you still can’t comfortably explain what altmetrics are, here’s my 1-minute “out of the box” video attempt to explain the Altmetric donut:

Charlie Rapple

Charlie Rapple is co-founder of Kudos, which showcases research to accelerate and broaden its reach and impact. She is also Vice Chair of UKSG and serves on the Editorial Board of UKSG Insights. @charlierapple.bsky.social, x.com/charlierapple and linkedin.com/in/charlierapple. In past lives, Charlie has been an electronic publisher at CatchWord, a marketer at Ingenta, a scholarly comms consultant at TBI Communications, and associate editor of Learned Publishing.

Discussion

4 Thoughts on "Celebrating Five Years of Altmetrics"

I think both congratulations and caveats are in order here. The speed with which the concepts behind altmetrics have taken hold (life in a digital environment creates more opportunities for analysis than a print environment offers) has been impressive, as has their uptake across the journals publishing spectrum.

To be fair, though, while these are tremendously useful tools, they have yet to provide much by way of useful data for researcher/research assessment. With a few exceptions (citations in policy documents), most measurements are indicators of attention rather than value, and so little has been offered to drive academia away from the flawed Impact Factor as the one metric to rule them all.

I also think some concern should be raised about altmetrics concentrating around one or two privately owned companies that have their own internally determined algorithms and practices that result in a researcher/article score. Different mentions are weighted with different values, and I worry that this may lead to the same sort of lack of transparency and capriciousness that has long been problematic for the Impact Factor.

Hi David

I think more transparency around altmetrics aggregators would be helpful. There was a nice talk at the conference by Zohreh Zahedi looking into differences between aggregators and calling for more transparency.

I think, at a minimum, the discussion around altmetrics has helped people be able to talk about other forms of contribution. In particular, data and software are getting additional focus, albeit in the form of citation.

I would also point to Kristi Holmes’s work on altmetrics for translational science as a good example of how this plethora of metrics is helping science. http://www.slideshare.net/mobile/kristiholmes/understanding-impact-through-alternative-metrics-developing-librarybased-assessment-services

David, I beg to differ. If the IF were flawed, folks would flee rather than continue to use it. Or as Churchill said, “Democracy is the worst form of government, except for all those other forms that have been tried from time to time.”

Are you suggesting that the Impact Factor is perfect in every way? That it is the ultimate, ideal metric for all of the decisions for which it is used? That it is a perfect diamond, each facet brilliant with no flaws whatsoever?

While I would stipulate that it is a useful measurement, it does indeed have flaws: it is slow (always looking back two years); it covers a limited time span (some articles’ true impact is seen only after two years); it can be greatly influenced by a small number of outlier articles; it favors review articles over original research reports; it is difficult to compare Impact Factors between disciplines; and it misses out on many types of impact (changes in clinical practice, patents, etc.). To suggest otherwise is absurd.
