The ultimate value of altmetrics remains an open question. Altmetrics could provide a paradigm-shifting toolset that radically remakes the way we judge the value of a researcher’s work, or the field could prove to be merely the latest flavor of the month, riding the hype cycle to an eventual niche role on the periphery of the research career structure. If the former, then a major hurdle to overcome is a reliance on factors that can be easily gamed: measurements of interest and attention. The power of publicity efforts in driving performance on these metrics offers journals a potential new arena for differentiation and added value for authors.
Any time the subject of altmetrics comes up, the question of gaming is immediately raised. But identifying actual gaming — what behaviors are acceptable and what should be considered cheating — is not as straightforward as you might think. Euan Adie, founder of Altmetric, recently posted an insightful look at the notion of gaming altmetrics.
Adie rightly argues that it’s perfectly reasonable for a researcher to want to draw attention to her own work. If you don’t want others to hear about your results and build on them in future research, then why bother doing the work in the first place? But where do you draw the line between good-faith efforts to spread knowledge and cynical attempts to game the system? Adie offers a variety of scenarios, from a researcher sending out a tweet about her new paper to a researcher purchasing retweets from a shady promotional service. Those extremes offer pretty clear white/black, good/bad scenarios, but the activities in between fall into shades of grey.
Adie’s examples all revolve around the actions of the researcher herself. He doesn’t begin to touch on the promotional efforts provided by scholarly journals (which should perhaps be added in as number 61 on this list). Journal publishers regularly do a tremendous amount of work to draw attention to articles: full-blown press conferences, press releases, interviews arranged with major media outlets, blog posts, tweets, social media campaigns, Google AdWords campaigns, email marketing, advertising, awards, and more.
It’s unclear where these sorts of activities fall along the acceptable/gaming spectrum. They are legitimate efforts to disseminate information, and valuable services that authors appreciate. At the same time, anything that smacks of advertising is likely to provoke a knee-jerk reaction from the academic community, which, in principle if not in action, often considers the pure pursuit of knowledge to be above such sordid behavior.
But if altmetrics take hold in the funding and career structure of academia, then expect marketing efforts to ramp up massively. Each university will likely build a publicity department (or expand its current one) if doing so means an increase in grant funding. Journal publishers will find themselves with a new means of differentiation, of separating themselves from their competitors in the eyes of authors: “Come publish your paper with my journal, and here’s the marketing campaign we’ll mount on your behalf.” You can almost picture the Mad Men-style pitch from competing journals in the laboratory conference room (though hopefully with a less smoke-filled atmosphere, unless the fume hoods are on the fritz that day).
Like nearly every other recent development in the world of journal publishing, this would favor the bigger publishers, further entrenching the current power structure. Oxford University Press, for example (full disclosure: my employer), already has a large, global in-house marketing team, augmented by a well-staffed publicity group charged solely with reaching out to traditional media outlets and driving attention through social media channels, including a highly trafficked blog and more than half a dozen widely followed Twitter accounts. There’s little chance that a small, independent publishing house or research society can offer authors the same effort and reach that come with the economies of scale inherent to the biggest publishing houses.
So if attention becomes the coin of the realm for researchers, expect even more market consolidation, and even more power granted to those journals that offer authors the most visible platforms.
All of this may be something of a longshot, though. Ultimately, what funding agencies and research institutions want to measure is the quality of the work done, and the value of the results produced. A researcher’s ability to draw attention is not a direct proxy for curing a disease, solving an important problem or changing the way we see the world. As Adie clearly states in his post, “Remember that the Altmetric score measures attention, not quality.”
Quality remains the hole in the middle of the Altmetric donut, and without clear measures of quality, altmetrics may never take on the central role many are seeking. The metrics offered may be better suited to particular areas of research — the development of software, tools, and methodologies, for example, where tracking the uptake and usage of the research end product can indeed offer a direct measure of success.
Until these questions are resolved, a shift to a publicity-fueled author economy remains speculative, something of a fascinating alternate-universe future that raises intriguing questions. Does a move to altmetrics merely replace chasing the journal with the best Impact Factor with chasing the journal with the strongest marketing team? Does a triumph of altmetrics signal an end to the ivory tower, a final breakdown of the separation (or at least the illusion of separation) between the academic and commercial worlds, creating a hype-fueled market where each new discovery becomes just another product to be sold to the masses? Is this the latest intrusion of the Silicon Valley/internet startup mindset into academia, where the attention and popularity a product draws seem more important than the actual value it generates? And perhaps most important, is this progress, and the right path for academia?