Clean, data-rich, and intuitive, forest plots can be used to visualize publication metrics.
Criticisms of altmetrics often seem equally applicable to other forms of research assessment, like the Impact Factor. Phill Jones suggests this is not because of a fundamental opposition to altmetrics but a fear that they will suffer the same pitfalls. The solution is to engage more with a somewhat neglected set of stakeholders: informaticians.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off such measures?
Citation practices vary between and within STM and HSS, and even from one discipline to another. Though citation metrics presume evidence of “impact,” a citation may in fact represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
Scholars are citing an increasingly aging collection of scholarship. Does this reflect the growing ease of accessing the literature, or a structural shift in the way science is funded and the way scientists are rewarded?
Scholars are citing proportionally more older material, a new Google paper reports. Digital publishing and delivery, and better search engines, can explain only part of the trend. Something much bigger is taking place.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
The lack of an Impact Factor is one reason that new journals have difficulty attracting submissions. Some journals, such as eLife and Cell Reports, qualify for an Impact Factor based on partial data. This post explores how that happens.
Why can’t researchers agree on whether Open Access is the cause of more citations or merely associated with better-performing papers? The answer is in the methods.
Yesterday saw the release of the 2013 Impact Factors for scholarly journals. We present a look back at some favorite posts examining the Impact Factor.
Should attention metrics play any role whatsoever in researcher assessment?
If we were to build a citation reporting system today, what would it look like? In this post, I propose a solution that would do away with a separate Journal Citation Report (JCR) and propose a suite of services built around the Web of Science, directed to the needs of journal editors and publishers.
Framing altmetrics as “alternative” may limit their potential — they have to be alternative to something already in existence. How do we move new measures robustly into the mainstream?