NISO has released the results of its year-long study of altmetrics in draft form for comment.
Should attention metrics play any role whatsoever in researcher assessment?
Last week, an editorial in Nature highlighted the problem of the proliferating number of authors on papers. Following a 2012 symposium at Harvard University, a small group has proposed a taxonomy of contributor roles that would add detail to an author list, and has tested it among a group of authors. Scholarly publishers should consider adopting this taxonomy to improve the accuracy and granularity of attribution and the assignment of credit.
Can machine-readable articles, built on author/editor/publisher-curated declarative statements and the associated data (or links thereto), be a way of generating metrics that get us nearer to a ‘standard candle’ of scientific research output?
EBSCO has recently acquired altmetrics startup Plum Analytics. What will this mean for both companies and altmetrics in general?
Revisiting Todd Carpenter’s 2012 post on the value of altmetrics.
Scholarly Kitchen chef Todd Carpenter discusses technical standards in today’s scholarly-publishing landscape, and what’s on the horizon.
Does the rise of altmetrics mean a shift in the journal publishing landscape where marketing and publicity efforts surrounding articles take precedence?
The design and construction of article performance measures can reveal deeply held biases.
Librarian Jeffrey Beall talks about his list of predatory open access journals, the potential pitfalls of article-level metrics, and more.
Chef Phil Davis discusses the current state of the art in analysis of citation, usage, and other information sources, and some of the opportunities and challenges for bibliometrics in a data-rich era.
An advocate for alternative metrics for article impact takes stock of where they are now, and where they’re going.
Social networking and crowdsourcing have attributes that may make them incompatible with the goals and processes of science. Can we accept that?
Recent data from the Guardian suggests that commenting remains a fringe activity, often dominated by a few voices. What might this mean for initiatives based on altmetrics and post-publication review?
Framing “altmetrics” as alternative may limit their potential, since they must then be “alternative” to something already in existence. How do we move new measures robustly into the mainstream?