Impact Metrics on Publisher Platforms: Who Shows What Where?
A review of 12 major publishers finds that they display an average of 6 journal-level impact metrics on their platforms. The Journal Impact Factor is the only metric displayed on all 12.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
Today’s guest author offers a progress report on recent efforts to build open-source technology for open access book metrics.
This post explores why many Middle East- and North Africa-based journals remain underrepresented in global indexing databases, how this affects both local and international knowledge flows, and what alternative pathways can bring the region into fuller view.
A report from this year’s Fiesole Retreat: Learning from the Past, Informing the Future.
Bringing back a post from 2018, as funders increasingly demand measurements of "real world" impact from researchers. Does this steer us toward the same traps we already face in research assessment, and is this short-term thinking problematic for the future of science?
Users (human and machine) are accessing scholarly content in new ways, challenging traditional usage analytics models. In this guest post, Tim Lloyd outlines the challenges ahead in quantifying usage.
In today’s Kitchen Essentials post, Alice Meadows interviews Tasha Mellins-Cohen, Executive Director of COUNTER Metrics (formerly Project COUNTER), which plays a critical role in enabling consistent usage metrics reporting.
A list of the most influential scientists suffers from anomalies and inaccuracies.
How many books do we read in a year? Wouldn't a better question be how well, and how thoughtfully, we engage with long-form content?
Mary Miskin offers an interview with Prof. Dr. Liying Yang, Director of the Scientometrics and Research Assessment Unit at the National Science Library, Chinese Academy of Sciences, who manages the Early Warning List and the CAS Journal Ranking.
Looking at five 'lines' that the publishing industry has broadly agreed upon, but that we now find ourselves crossing.
A hackathon for the Financial Times Top 50 journals list is underway for those who want to shape how metrics are developed. An interview with Andrew Jack.
Sharing and evaluating early-stage research findings can be challenging, but that's starting to change. Learn more in this guest post by Sami Benchekroun and Michelle Kuepper of Morressier.
The Altmetric “flower” is an icon, and the annual Top 100 list a much-anticipated event. But is the flower really a stalk?