Why do authors continue to cite preprints years after those articles have been formally published?
Hoping to woo authors away from commercial publishers, a group of biomedical science societies has launched a new alliance to promote the value of publishing in society journals.
Citations, and the metrics built around their use, are an important part of the evaluation and promotion of science and scientists, yet little attention is paid to them in the peer review process. In this post, Angela Cochran calls for critically reviewing reference lists and developing standards around what should and should not be included.
Citation indexes need to provide standardized citation histograms for editors and publishers. Without such standards, histograms are unlikely to be widely adopted; at worst, their absence will encourage the production of histograms that selectively highlight or obscure the data.
A proposal to substitute graphs of citation distributions for impact factors introduces many problems the authors don’t seem to have fully grasped, including unintentionally bolstering the importance of the very metric they seek to diminish.
After many long conversations with colleagues within and beyond the Scholarly Kitchen about what researchers need to know about scholarly publishing, Alice Meadows and Karin Wulf compiled a list of what they believe are the most urgent issues.
An official response from Wim Meester, Head of Content Strategy for Scopus.
While offering real improvements over Thomson Reuters, Scopus may be suffering from serious data integrity issues and communication problems with its third-party publishers.
Thomson Reuters’ approach of indexing by journal section and revising on demand leads to great inconsistencies across journals and inflates the Impact Factors of elite journals. The solution: remove the human element.
How a shrinking journal receives an artificial boost to its leading citation indicator.
Can PLOS exist without a mega-journal?
The recent editorial board defection from an Elsevier journal brings up issues raised in Todd Carpenter’s 2013 post on editorial boycotts and declarations of independence. They generate a lot of heat, but what do the data say about the actual success of the new journals compared to the journals that were overthrown?
Clean, data-rich, and intuitive, forest plots can be used to visualize publication metrics.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Can network-based metrics allow us to separate true scientific influence from mere popularity?