Does Altering A Dataset Merit Retraction?
Self-archiving on personal sites is perfectly permitted under many journal data policies. But what happens when an author alters the underlying data?
Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication, and survey data. He has a Ph.D. in science communication from Cornell University (2010) and extensive experience as a science librarian (1995-2006), and was trained as a life scientist. https://phil-davis.com/
Even a flawed paper can offer lessons on how (not) to report, and what (not) to claim.
After fabricating a claim about a nonexistent study done by the AAAS, the AI software admitted that it made a mistake and then apologized.
Editors at The BMJ are lousy at predicting the citation performance of research papers. Or are they?
Twitter does not increase citations, a reanalysis of author data shows. Did the authors p-hack their data?
When a reputable journal refuses to get involved with a questionable paper, science looks less like a self-correcting enterprise and more like a way to amass media attention.
Article Attention Scores for papers don’t seem to add up, leading one to question whether Altmetric data are valid, reliable, and reproducible.
Can Clarivate deliver on a single, normalized measurement of citation impact or did its marketing department promise too much?
Do Sci-Hub downloads cause more citations, or are high impact papers simply downloaded more often?
Some journals are expected to benefit immensely under Clarivate’s new counting model.
Starting in 2021, Journal Impact Factors will be calculated using online publication dates, not print ones. But a phased roll-out may lead to bias for some journals.
We stand by our data. We just won’t share it or believe that you replicated our study.
Scientific authorship comes with benefits, but also responsibilities. If authors are unwilling to explain their work, editors must step up to defend their journal.
A paper linking tweets and citations comes under attack, driven less by its findings than by the authors’ inability to answer even basic questions about their paper and their resistance to sharing their data.
A reanalysis of TrendMD experimental data reveals details on its effectiveness, novelty, and bias.