If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score?
The idea of “reanalysis” needs to be rethought, if recent examples are any indication of what this trend could do to science.
“Big data” continues to draw attention, but will it ever amount to more than a hypothesis-generating engine and supplementary findings?
Data sharing and publication is a topic we need to consider carefully, weighing the risks, costs, and benefits, as well as the complexities.
When novel, newsworthy results are discovered to be wrong, is that still news?
A new film series offers a chance to dance your way through statistical analysis.
Chef Phil Davis discusses the current state of the art in analysis of citation, usage, and other information sources, and some of the opportunities and challenges for bibliometrics in a data-rich era.
Do higher impact journals do a better job with their statistics? A study with a sexy title proves to be poorly designed and poorly reported.
Testing the hypothesis that editors are manipulating publication dates to increase their journal’s Impact Factor.
Two thought-provoking articles published last week in JAMA make compelling and complementary arguments about the rhetorical power of both numbers and words in conveying the message of science.
Simplifying the complex isn’t a simple task. A new book by a practiced hand and statistician proves entertaining and enlightening.
Is the Web making experts more susceptible to challenge? Is this a good thing for society as a whole? Or is it creating a confusion demagogues can exploit?
A recent article about statistics started a useful discussion in the blogosphere. And I was left wondering: Are open data dreams built on statistical sand?
Project COUNTER releases its third Code of Practice for the counting and reporting of usage data. Is COUNTER also promoting overconfidence in its products?