The real innovation of CiteScore is not another performance metric, but a new marketing model focused on editors.
Can network-based metrics allow us to separate true scientific influence from mere popularity?
Attempts to use new measurements to predict or represent journal quality more finely are bound to falter because of characteristics inherent to journals themselves.
Putting metrics and altmetrics into perspective can help us separate secondary signals from primary signals, and may lead to a greater appreciation of alternatives to metrics, or alt2metrics.
The rankings of journals based on F1000 scores reveal a strong bias against larger journals and those with little disciplinary overlap with the biosciences.
The outer ring of citation remains a point of vulnerability for quality proxies, as does reducing complex things to simple lists or numbers. When will we learn?
Does the Principle of Repeated Improvement Result in Better Journal Impact Estimates than Raw Citation Counts?
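The "principle of repeated improvement" refers to iteratively re-weighting citations by the prestige of the citing source, in the spirit of eigenvector-centrality metrics, rather than treating all citations equally as raw counts do. A minimal sketch of the idea, using an invented three-journal citation matrix (the data and normalization choices here are illustrative assumptions, not any particular metric's actual method):

```python
import numpy as np

# Toy data (hypothetical): C[i, j] = citations from journal j to journal i.
C = np.array([
    [0, 3, 1],
    [2, 0, 4],
    [1, 1, 0],
], dtype=float)

# Column-normalize so each citing journal distributes one unit of credit.
M = C / C.sum(axis=0, keepdims=True)

# Start from equal scores (the raw-citation-count analogue), then
# repeatedly re-weight each journal's credit by its citers' current scores.
scores = np.full(3, 1 / 3)
for _ in range(100):
    scores = M @ scores
    scores /= scores.sum()  # keep scores on a comparable scale

print(np.round(scores, 3))
```

Under repeated improvement, a journal cited by highly scored journals ends up ahead of one with the same raw count cited mostly by low-scored journals, which is exactly the distinction raw citation counts cannot make.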