Citation practices vary between STM and HSS, and they also vary across and within individual disciplines. Though citation metrics treat every citation as evidence of “impact,” a citation may in fact represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
If the Internet created a burgeoning market of cheap academic journal knockoffs, should we be surprised to witness new knockoff ratings companies?
Is there (ever) a good time to overhaul a publisher’s production system? If you care about your journals’ Impact Factors, the answer is “yes.”
This year, Thomson Reuters suspended six business journals for engaging in a citation cartel. Should authors be held responsible for the malfeasance of their editors? We propose an alternative to punishing an entire community for the poor decisions of a few.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
The lack of an Impact Factor is one reason that new journals have difficulty attracting submissions. Some journals, such as eLife and Cell Reports, qualify for an Impact Factor based on partial data. This post explores how that happens.
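For context, the standard two-year Impact Factor for year $Y$ is the ratio below (the symbols $C$ and $N$ are our shorthand, not Thomson Reuters’ notation):
\[
\mathrm{IF}_Y = \frac{C_Y(Y{-}1) + C_Y(Y{-}2)}{N_{Y-1} + N_{Y-2}}
\]
where $C_Y(X)$ counts citations received in year $Y$ by items the journal published in year $X$, and $N_X$ counts citable items published in year $X$. A journal that launched in year $Y{-}1$ simply has $N_{Y-2} = 0$ and $C_Y(Y{-}2) = 0$, so the ratio can be computed from a single year of content; that is the sense in which a score can rest on partial data.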
Attempts to use new measurements to predict or represent journal quality more finely are bound to falter because of qualities inherent in journals themselves.
Last week, Thomson Reuters launched a new platform called InCites, which combines the Journal Citation Reports with the Essential Science Indicators. In this Q&A, Patricia Brennan of Thomson Reuters describes the new platform and the additions that address concerns raised by critics.
Yesterday saw the release of the 2013 Impact Factors for scholarly journals. We present a look back at some favorite posts examining the Impact Factor.
NISO has released the results of its year-long study of altmetrics in draft form for comment.
Publication output for the largest journal in science continues to fall, just not as fast as leading indicators would predict.
This week marks the golden anniversary of the Science Citation Index, introduced by Eugene Garfield in 1964.
Should attention metrics play any role whatsoever in researcher assessment?
If we were to build a citation reporting system today, what would it look like? In this post, I propose doing away with a separate Journal Citation Reports (JCR) in favor of a suite of services built around the Web of Science, directed at the needs of journal editors and publishers.
EBSCO has recently acquired altmetrics startup Plum Analytics. What will this mean for both companies and altmetrics in general?