The real innovation of CiteScore is not another performance metric, but a new marketing model focused on editors.
A new book reviews various instances of piracy in the media industry and proposes using Big Data analyses as a means to manage it.
Researchers may publish their best work at any point in their careers, a new study reports. This is not the same as success being the result of random forces or just plain “dumb luck.”
Citation network maps may indicate when gaming is taking place. Proving intention is a different story.
Citation indexes need to provide standardized citation histograms for editors and publishers. Without them, widespread adoption is unlikely. At worst, their absence will encourage the production of histograms that selectively highlight or obscure the data.
A proposal to substitute graphs of citation distributions for impact factors introduces many problems the authors don’t seem to have fully grasped, including unintentionally bolstering the importance of the very metric they seek to diminish.
Citation networks can provide much more than journal metrics and rankings. Publishers should look to them for competitive intelligence.
If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score?
Thomson Reuters’ approach of indexing by journal section and revising by demand leads to great inconsistencies across journals and inflates the Impact Factors of elite journals. The solution: remove the human element.
Charlie Rapple reports on the 2:AM conference, which celebrated five years of altmetrics and considered what we should aspire to achieve in the next five years.
Clean, data-rich, and intuitive, forest plots can be used to visualize publication metrics.
Criticisms of altmetrics often seem equally applicable to other forms of research assessment, like the Impact Factor. Phill Jones suggests this reflects not a fundamental opposition to altmetrics but a fear that they will suffer the same pitfalls. The solution is to engage more with a somewhat neglected set of stakeholders: informaticians.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Citation practices vary between and within STM and HSS; they also vary by discipline and within disciplines. Though citation metrics presume evidence of “impact,” in fact a citation may represent a range of intentions. Given the emphasis on citation indices, isn’t it important to query what scholars are actually doing when they cite another scholar’s work?
Scholars are citing an increasingly aging collection of scholarship. Does this reflect the growing ease of accessing older literature, or a structural shift in the way science is funded and the way scientists are rewarded?