bioRxiv and Citations: Just Another Piece of Flawed Bibliometric Research?
Even a flawed paper can offer lessons on how (not) to report, and what (not) to claim.
Citing chatbots as information sources offers little in terms of promoting smart use of generative AI and could even be damaging.
If you use a chatbot in writing a text, and are discouraged from listing it as a coauthor, should you attribute the relevant passages to the tool via citation instead? Is it appropriate to cite chatbots as information sources?
How many articles from predatory journals are being cited in the legitimate (especially medical) literature? Some disturbing findings.
The recent editorial board defection from an Elsevier journal brings up issues raised in Todd Carpenter’s 2013 post on editorial boycotts and declarations of independence. They generate a lot of heat, but what do the data say about the actual success of the new journals compared to the journals that were overthrown?
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Citation practices vary between and within STM and HSS; they also vary by discipline and within disciplines. Though citation metrics presume evidence of “impact,” in fact a citation may represent a range of intentions. Given the emphasis on citation indices, isn’t it important to query what scholars are actually doing when they cite another scholar’s work?
Scholars are citing an increasingly aging body of scholarship. Does this reflect the growing ease of accessing the literature, or a structural shift in the way science is funded, and the way scientists are rewarded?
The majority of time spent editing and formatting citations in the publication process is time wasted. We now have in place nearly all the components — persistent identifiers, linked metadata, and style sheets — to improve how citations are structured and processed. Using these tools can significantly improve the accuracy of references and reduce the time editors spend on this production function. Yet even when automated, the process bounces from linked metadata to text and back to metadata again.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
Editorial boycotts and declarations of independence generate a lot of heat, but what do the data say about the actual success of the new journals compared to the journals that were overthrown?
Expert ratings have poorer predictive power than journal citation metrics, a study reveals.
Editors have learned how to exploit a simple loophole in the calculation of the Impact Factor. Is it time to close that loophole?
Making sense of non-events (citation, circulation, and publication) requires context and a tolerance for uncertainty.
A new paper demonstrates how easy it is to game Google Scholar citations, and how the system resists correction.