Article Attention Scores don’t seem to add up, leading one to question whether Altmetric data are valid, reliable, and reproducible.
A paper linking tweets and citations comes under attack, less for its findings than for the authors’ inability to answer even basic questions about their paper and their resistance to sharing their data.
Despite the near consensus about the popularity (or lack thereof) of commenting on academic articles, there is surprisingly little publicly available data on commenting rates. To address this, a team of academics from the Universities of Sheffield and Loughborough has recently published research into article commenting on PLOS journals. Simon Wakeling, Stephen Pinfield and Peter Willett report here on their findings.
Thoughts on Elsevier’s acquisition of Plum Analytics.
After many long conversations among colleagues within and beyond the Scholarly Kitchen about what researchers need to know about scholarly publishing, Alice Meadows and Karin Wulf compiled a list of what they consider the most urgent issues.
Robert Harington asks Tim Collins for his views on industry trends, seen through the prism of his leadership role at EBSCO, and explores Tim’s sense of a connected world of stakeholders in today’s publishing industry.
The Open Syllabus Project has created a database of over 1 million college syllabuses and extracted the names of the materials used in these courses. These materials are analyzed quantitatively and ranked. The creators of the service propose a new metric for the evaluation of academic publications.
Of the many ways to measure the quality of a publication, one that is often overlooked is the workings of the marketplace itself. Purchases of published material are made in large part on the basis of that material’s quality, making the marketplace something of an editor of genius. This mechanism incorporates all other metrics, from the impact factor to altmetrics. Unfortunately, the marketplace is not free to exercise its judgment when many participants seek dominant, even monopolistic, control.
Charlie Rapple reports on the 2:AM conference, which celebrated five years of altmetrics and considered what we should aspire to achieve in the next five years.
Criticisms of altmetrics often seem equally applicable to other forms of research assessment, such as the Impact Factor. Phill Jones suggests this stems not from a fundamental opposition to altmetrics but from a fear that they will fall into the same pitfalls. The solution is to engage more with a somewhat neglected set of stakeholders: informaticians.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off such measures?
When we talk about impact, metrics, and understanding the customer, we are actually talking about surveillance data. We should have an open debate about what this means.
Late last year, Nature Publishing Group embarked on an experiment to allow users to share content. Some commentators accused NPG of using controlled sharing to snoop on customers. In this post, Phill Jones explores the difference between aggregated usage data and spying on users.
Altmetric’s annual top 100 list provides an opportunity to see what science reached the general public and to think more about what information altmetrics really provide.
Guest Chef Phill Jones takes a look at an often under-recognized population of researchers and explains why publishers should give them more attention.