Can Scopus Deliver A Better Journal Impact Metric?
While offering real improvements over Thomson Reuters, Scopus may be suffering from serious data integrity issues and communication problems with its third-party publishers.
Thomson Reuters’ approach of indexing by journal section and revising by demand leads to great inconsistencies across journals and inflates the Impact Factors of elite journals. The solution: remove the human element.
How a shrinking journal receives an artificial boost to its leading citation indicator.
Can PLOS exist without a mega-journal?
The recent editorial board defection from an Elsevier journal brings up issues raised in Todd Carpenter’s 2013 post on editorial boycotts and declarations of independence. They generate a lot of heat, but what do the data say about the actual success of the new journals compared to the journals that were overthrown?
Clean, data rich, and intuitive, forest plots can be used to visualize publication metrics.
There is no shortage of critique of citation metrics and other efforts to quantify the “impact” of scholarship. Will a report calling for “responsible metrics” help researchers, administrators, and funders finally wean themselves off them?
Can network-based metrics allow us to separate true scientific influence from mere popularity?
Citation practices vary between STM and HSS, across disciplines, and within disciplines. Though citation metrics presume evidence of “impact,” a citation may in fact represent a range of intentions. Given the emphasis on citation indices, isn’t it important to ask what scholars are actually doing when they cite another scholar’s work?
If the Internet created a burgeoning market of cheap academic journal knockoffs, should we be surprised to witness new knockoff ratings companies?
Is there (ever) a good time to overhaul a publisher’s production system? If you care about your journals’ Impact Factors, the answer is “yes.”
This year, Thomson Reuters suspended six business journals for engaging in a citation cartel. Should authors be held responsible for the malfeasance of their editors? We propose an alternative to punishing an entire community for the poor decisions of a few.
A trend toward shaming journals that promote their impact factors needs to be rolled back. Impact factors are journal metrics. It’s the other uses that need to be curtailed.
The lack of an Impact Factor is one reason that new journals have difficulty attracting submissions. Some journals, such as eLife and Cell Reports, qualify for an Impact Factor based on partial data. This post explores how that happens.
Attempts to use new measurements to more finely predict or represent journal quality are bound to falter because of some qualities inherent to journals themselves.