When the Scoreboard Becomes the Game, It’s Time to Recalibrate Research Metrics
Today’s post discusses research metrics and their relationship to research integrity, inclusivity, and long-term impact.
This is the third and final article in a guest series reflecting on the main themes and ideas gathered and discussed at The Munin Conference at the end of 2024. Today’s focus is measuring impact.
Bibliometric databases are essential tools for research and publishing strategy. But the variability in how they parse publisher metadata, and their constant evolution, make it difficult, if not impossible, to exactly reproduce any given piece of research.
Mary Miskin offers an interview with Prof. Dr. Liying Yang, Director of the Scientometrics and Research Assessment Unit at the National Science Library, Chinese Academy of Sciences, who manages the Early Warning List and the CAS Journal Ranking.
Looking at five ‘lines’ that the publishing industry has broadly agreed upon, but that we now find ourselves crossing.
Journal-level impact feeds academic impact, which in turn feeds broader impact potential.
Christos Petrou takes a look at the Guest Editor model for publishing and its recent impact on Hindawi and MDPI, as Clarivate has delisted some of their journals.
We revisit our analysis of how adopting a strict data policy affects journal submissions and find that the effects depend heavily on Impact Factor trends.
Alison Mudditt looks at the recently released TOP Factor from the Center for Open Science, and the bigger picture of shifting the nature of research assessment.
Today’s guest post, by Anita Bandrowski and Martijn Roelandse, highlights some of the challenges – and opportunities – of evaluating the quality of research rather than its impact.
Why do authors continue to cite preprints years after they’ve been formally published?
Hoping to woo authors away from commercial publishers, a group of biomedical science societies have launched a new alliance to promote the value of publishing in society journals.
Citations and the metrics around their use are an important part of evaluation and promotion of science and scientists and yet, little attention is paid to them in the peer review process. In this post, Angela Cochran makes a call to critically review reference lists and develop standards around what should and should not be included.
Citation indexes need to provide standardized citation histograms for editors and publishers. Without such standards, histograms are unlikely to be widely adopted. At worst, their absence will encourage the production of histograms that selectively highlight or obscure the data.
A proposal to substitute graphs of citation distributions for impact factors introduces many problems the authors don’t seem to have fully grasped, including unintentionally bolstering the importance of the very metric they seek to diminish.