bioRxiv and Citations: Just Another Piece of Flawed Bibliometric Research?
Even a flawed paper can offer lessons on how (not) to report, and what (not) to claim.
Christos Petrou presents evidence suggesting that growth in retractions has not been universal across regions and subject areas, and that it is primarily driven by the industrial-scale activity of papermills (rather than the activity of individual researchers) and the growth of research from China.
Christos Petrou looks at the factors that go into determining a journal’s turnaround times, and how we can help authors make better-informed choices.
Hélène Draux presents the first of a two-part effort to chart the topography of mental health scholarship. Here, established methods, including pre-existing classifications, are employed.
We all know the journals market has rapidly consolidated over recent years. But where’s the data? I set out to find some numbers to back up the conventional wisdom.
A new conference explores ways research can turn the scientific method onto improving its own results.
Can Clarivate deliver on a single, normalized measurement of citation impact or did its marketing department promise too much?
Thoughts on Elsevier’s acquisition of Plum Analytics.
Elsevier’s new CiteScore service is a carefully thought-out element in the company’s competitive strategy, but it reinforces the widespread error that bibliometrics can be used as proxies for the quality of a publication.
Citation networks can provide much more than journal metrics and rankings. Publishers should look to them for competitive intelligence.
Peer-to-peer sharing of scientific articles is common for Indian scientists, a new study reports.
Indexing of proceedings papers, marred by conversion errors, draws ire from the bibliometrics community. Some question the effect on journal Impact Factors.