Guest Post — A Smarter Way to License Research Articles for AI
If LLMs are the future of information discovery, valuable scholarly content risks being left behind — unless we build a bridge with better licensing.
Bringing back a post from 2018, as funders increasingly demand measurements of “real world” impact from researchers. Does this steer us into the same traps we already face in research assessment, and is this short-term thinking problematic for the future of science?
After fabricating a claim about a nonexistent AAAS study, the AI software admitted its mistake and apologized.
Rachel Helps, the Wikipedian-in-residence at the BYU libraries, discusses the intersection of scholarly journals and Wikipedia.
When a reputable journal refuses to get involved with a questionable paper, science looks less like a self-correcting enterprise and more like a way to amass media attention.
As more publishers semantically enrich documents, Todd Carpenter considers whether links are the same as citations.
Do Sci-Hub downloads cause more citations, or are high-impact papers simply downloaded more often?
We stand by our data. We just won’t share it or believe that you replicated our study.
Scientific authorship comes with benefits, but also responsibilities. If authors are unwilling to explain their work, editors must step up to defend their journal.
A paper linking tweets and citations comes under attack, less for its findings than for the authors’ inability to answer even basic questions about their paper and their resistance to sharing their data.
Sharing and evaluating early-stage research findings can be challenging, but that’s starting to change. Learn more in this guest post by Sami Benchekroun and Michelle Kuepper of Morressier.
Why do authors continue to cite preprints years after the papers have been formally published?
Last week’s Transforming Research conference in Baltimore, MD, gathered a range of speakers across the academic and professional spectrum. Charlie Rapple highlights some of the new research that was shared, and draws out some of the prevalent themes.
Citations and the metrics built around them are an important part of the evaluation and promotion of science and scientists, and yet little attention is paid to them in the peer review process. In this post, Angela Cochran calls for critically reviewing reference lists and developing standards around what should and should not be included.
As an alternative to the Journal Impact Factor, editors propose an index that counts only highly cited papers.