Altmetric has released its annual Top 100 list, the articles its measures identify as having received the most attention for the year. As in 2013, the list is fascinating for what it tells us about communication between scientists, the attention paid to science by the general public, and also for what it tells us about altmetrics themselves.
The article drawing the most attention for 2014 describes the hugely controversial study Facebook performed to see whether it could emotionally manipulate users, and articles about social media appear throughout the list. This is not particularly surprising given the emphasis that Altmetric places on social media sharing, as well as the navel-gazing that dominates such forums. People using social media to talk about social media seems par for the course. Nutritional studies and various diets continue to draw a great deal of attention, as do clever joke and semi-joke articles about things like James Bond, chocolate consumption and time travelers, all of which also make the top 10.
Controversy always scores well with altmetrics. Along with the Facebook article, two retracted papers on STAP stem cells make the top 20. Schadenfreude made an otherwise unremarkable paper on melanism the second-highest-ranking article of the year because the authors and editors forgot to delete a note to themselves about whether to cite "the crappy Gabor paper." I suspect that this would have been surpassed by the "Take Me Off Your F–king Mailing List" paper if Altmetric included predatory journals in its rankings.
And then there’s the just-plain-weird, which always gets people talking. Odd sexual development, brain-to-brain communication and mysteriously moving rocks all make the top 15. This is all well in line with what we saw last year, and then, as now, if you dig past the noise, you can start to see some of the more important stories of the year that reflect the public’s concerns (the Fukushima meltdown in 2013 and Ebola in 2014).
Altmetrics tell us a great deal about what captures the attention of the public, as well as what scientists are talking about among themselves. It would be interesting to try to separate out the two: which results here show the gossip within the scientific community, and which represent the interface between the community and the general public? It’s all interesting stuff, ripe for speculation as to how this reflects modern society. Also of interest: according to Altmetric, there’s no evidence in this list that open access publishing makes an article more likely to get shared and discussed.
But what altmetrics don’t tell us is much about the best or the most important science done during the year. To be fair, Altmetric is very up front about this, stating:
It’s worth mentioning again that this list is of course in no way a measure of quality of the research, or of the researcher.
There’s general consensus that the Impact Factor should not be the primary methodology for measuring research impact. No one, however, seems particularly sure about what to use in its place. While a deep, expert-led understanding of a researcher’s work is always the ideal, practical considerations make this difficult, if not impossible, in many cases. As funding agencies and research councils struggle with these questions, there are often suggestions that altmetrics should play some role, in particular as a way to measure a work’s “societal impact”.
But looking at this list, it would seem that altmetrics are not yet up to that task. People finding something funny or interesting does not necessarily translate into “impact”, no matter how fuzzy that term may be. Actually impactful research, such as a new diagnostic technique put into broad use or a historical examination leading to a change in legal policy, is likely to be drowned out by a paper claiming that dogs align with magnetic fields when they defecate (number 5 for the year). How do we separate the serious wheat from the chaff when the chaff performs so much better in the favored measurements?
These year-end lists serve as great reminders that altmetrics aren’t quite there yet. The kitchen sink approach, including nearly anything that can be measured, still needs refinement. The big questions remain: what do measures of attention really tell us, does this in any way correlate with importance, quality or value to society, and is there something else we should be measuring instead?