Did ChatGPT Just Lie To Me?
After making up a false claim about a nonexistent study done by the AAAS, the AI software admitted that it made a mistake and then apologized.
Rachel Helps, the Wikipedian-in-residence at the BYU libraries, discusses the intersection of scholarly journals and Wikipedia.
When a reputable journal refuses to get involved with a questionable paper, science looks less like a self-correcting enterprise and more like a way to amass media attention.
As more publishers semantically enrich documents, Todd Carpenter considers whether links are the same as citations.
Do Sci-Hub downloads cause more citations, or are high impact papers simply downloaded more often?
We stand by our data. We just won’t share it or believe that you replicated our study.
Scientific authorship comes with benefits, but also responsibilities. If authors are unwilling to explain their work, editors must step up to defend their journal.
A paper linking tweets and citations comes under attack, but more from the authors’ inability to answer even basic questions about their paper and resistance to share their data.
Sharing and evaluating early-stage research findings can be challenging, but that’s starting to change. Learn more in this guest post by Sami Benchekroun and Michelle Kuepper of Morressier.
Why do authors continue to cite preprints years after they’ve been formally published?
Last week’s Transforming Research conference in Baltimore, MD, gathered a range of speakers across the academic and professional spectrum. Charlie Rapple highlights some of the new research that was shared, and draws out some of the prevalent themes.
Citations and the metrics around their use are an important part of the evaluation and promotion of science and scientists, and yet little attention is paid to them in the peer review process. In this post, Angela Cochran calls for critically reviewing reference lists and developing standards around what should and should not be included.
As an alternative to the Journal Impact Factor, editors propose an index that measures highly cited papers.
A proposal to substitute graphs of citation distributions for impact factors introduces many problems the authors don’t seem to have fully grasped, including unintentionally bolstering the importance of the very metric they seek to diminish.
If Thomson Reuters can calculate Impact Factors and Eigenfactors, why can’t they deliver a simple median score?