Scholars are citing proportionally more older material, a new Google paper reports. Digital publishing, online delivery, and better search engines can explain only part of the trend. Something much bigger is taking place.
The trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics; it is their other uses that need to be curtailed.
The lack of an Impact Factor is one reason that new journals have difficulty attracting submissions. Some journals, such as eLife and Cell Reports, qualify for an Impact Factor based on partial data. This post explores how that happens.
Why can’t researchers agree on whether Open Access is the cause of more citations or merely associated with better performing papers? The answer is in the methods.
Yesterday saw the release of the 2013 Impact Factors for scholarly journals. We present a look back at some favorite posts examining the Impact Factor.
Should attention metrics play any role whatsoever in researcher assessment?
If we were to build a citation reporting system today, what would it look like? In this post, I propose doing away with a separate Journal Citation Report (JCR) in favor of a suite of services built around the Web of Science, directed to the needs of journal editors and publishers.
Framing “altmetrics” as alternative may limit their potential — by definition, they must be “alternative” to something already in existence. How do we move new measures robustly into the mainstream?