A proposal to substitute graphs of citation distributions for impact factors introduces many problems the authors don’t seem to have fully grasped, including unintentionally bolstering the importance of the very metric they seek to diminish.
Citation networks can provide much more than journal metrics and rankings. Publishers should look to them for competitive intelligence.
If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score?
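The point of a median score is that citation distributions are heavily skewed, so a mean-based metric like the Impact Factor is pulled upward by a few highly cited papers. A minimal sketch, using invented citation counts (not real journal data), shows the gap:

```python
# Hypothetical illustration: citation counts within a journal are skewed,
# so the mean and the median can tell very different stories.
from statistics import mean, median

# Invented citation counts for ten articles in one journal (not real data)
citations = [0, 0, 1, 1, 2, 2, 3, 4, 8, 120]

print(mean(citations))    # 14.1 -- one highly cited paper inflates the mean
print(median(citations))  # 2.0  -- the median reflects the typical article
```

Here a single outlier raises the mean sevenfold above the median, which is exactly why a median score would be a useful complement to the Impact Factor.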
Thomson Reuters’ approach of indexing by journal section and revising by demand leads to great inconsistencies across journals and inflates the Impact Factors of elite journals. The solution: remove the human element.
Charlie Rapple reports on the 2:AM conference, which celebrated five years of altmetrics and considered what we should aspire to achieve in the next five years.
Clean, data rich, and intuitive, forest plots can be used to visualize publication metrics.
Criticisms of altmetrics often seem equally applicable to other forms of research assessment, like the Impact Factor. Phill Jones suggests this reflects not a fundamental opposition to altmetrics but a fear that they will suffer the same pitfalls. The solution is to engage more with a somewhat neglected set of stakeholders: informaticians.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off such measures?
Citation practices vary between and within STM and HSS; they also vary by discipline and within disciplines. Though citation metrics treat each citation as evidence of “impact,” a citation may in fact represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
Scholars are citing an increasingly aging collection of scholarship. Does this reflect the growing ease of accessing the literature, or a structural shift in the way science is funded, and the way scientists are rewarded?
Scholars are citing proportionally more older material, a new Google paper reports. Digital publishing and delivery, and better search engines, can explain only part of the trend. Something much bigger is taking place.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
The lack of an Impact Factor is one reason that new journals have difficulty attracting submissions. Some journals, such as eLife and Cell Reports, qualify for an Impact Factor based on partial data. This post explores how that happens.
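The two-year Impact Factor is citations in year Y to items published in years Y-1 and Y-2, divided by the citable items published in those two years. For a journal launched mid-window, "partial data" simply means a shorter publication history feeds both numerator and denominator. A minimal sketch, with all figures invented for illustration:

```python
# Hedged sketch of the two-year Impact Factor calculation.
# All numbers below are invented; they are not real journal data.

def impact_factor(citations_to_window: int, citable_items_in_window: int) -> float:
    """Citations received in year Y to items published in Y-1 and Y-2,
    divided by citable items published in Y-1 and Y-2."""
    return citations_to_window / citable_items_in_window

# Full window: two years of publications in the denominator
print(impact_factor(citations_to_window=900, citable_items_in_window=300))  # 3.0

# Partial window: a new journal that only published during Y-1,
# so both numerator and denominator cover a shorter span
print(impact_factor(citations_to_window=450, citable_items_in_window=150))  # 3.0
```

The formula itself is indifferent to how much of the window the journal existed for, which is how a journal with only partial data can still qualify for a score.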
Why can’t researchers agree on whether Open Access is the cause of more citations or merely associated with better performing papers? The answer is in the methods.
Yesterday saw the release of the 2013 Impact Factors for scholarly journals. We present a look back at some favorite posts examining the Impact Factor.