How many articles from predatory journals are being cited in the legitimate (especially medical) literature? Some disturbing findings.
The recent editorial board defection from an Elsevier journal brings up issues raised in Todd Carpenter’s 2013 post on editorial boycotts and declarations of independence. They generate a lot of heat, but what do the data say about the actual success of the new journals compared with the journals that were overthrown?
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Citation practices vary between STM and HSS, across disciplines, and within disciplines. Though citation metrics presume a citation is evidence of “impact,” in fact a citation may represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
Scholars are citing an increasingly aged body of scholarship. Does this reflect the growing ease of accessing the literature, or a structural shift in the way science is funded, and the way scientists are rewarded?
The majority of the time spent editing and formatting citations in the publication process is time wasted. We now have nearly all the components in place to use persistent identifiers, linked metadata, and style sheets to improve how citations are structured and processed. Using these tools can significantly improve the accuracy of references and reduce the time editors spend on this production function. Yet even when automated, the workflow still bounces from linked metadata to text and back to metadata again.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
Expert ratings have poorer predictive power than journal citation metrics, study reveals.
Editors have learned how to exploit a simple loophole in the calculation of the Impact Factor. Is it time to close that loophole?
Making sense of non-events (citation, circulation, and publication) requires context and a tolerance for uncertainty.
A new paper demonstrates how easy it is to game Google Scholar citations, and how the system resists correction.
As new metrics are explored, not everything equates to “impact.” Getting our terms right will help us get our thinking straight.
An attempt to entice citations from authors leads to a memorable story for the holidays.
Rankings of journals based on F1000 scores reveal a strong bias against larger journals and those with little disciplinary overlap with the biosciences.