Guest Post: The Human Heart of Science — Navigating AI Anxiety in the Academic World
Today’s guest blogger calls for “rehumanizing” our view of AI innovations and their impacts on our mental health and our communities.
Today’s guest bloggers spotlight a gap in traditional usage reporting: third-party AI usage. They recommend the steps needed to recover this missing usage data.
How are two competing neuroscience journals faring since the editorial board of one departed to create the other?
Today’s guest blogger calls for adding “understandable” to the FAIR data principles, to ensure we do not surrender human knowledge in our rush for automation.
Today’s guest bloggers assert that the future of scholarly publishing depends on mastering science communication with the same rigor that global consumer brands apply to marketing.
AI-driven zero-click search is widening the gap between visibility and usage, threatening publisher revenue, research integrity, and trust. How should we respond?
Current AI disclosure guidelines are failing, driving AI use underground rather than making it transparent. In this follow-up post, I turn to the more challenging question: what should publishers do about it?
Robert Harington attempts to shine a light on some of the political problems scholarly societies and academic institutions face in the current political climate.
Today’s post calls for community feedback on STM’s latest recommendations for alt-text metadata to support images in accessible scholarly publishing.
Today’s post paves a clear path forward in making AI work for publishers in the brave new agentic world.
Is open scholarship an honest signal of researcher integrity? We present preliminary evidence that data and code sharing, preprinting, and other open behaviors are indeed less common in papermill articles.
Only a negligible percentage of authors seem to actually be disclosing their AI use. Here’s why I think that’s the case.
Today’s guest bloggers reflect on the experience of “imposter syndrome” and how we might adopt a new approach to moments of uncertainty and change.
Today’s guest post features an interview with William Gunn discussing how AI will (or won’t!) change the future of reference management tools.
Today’s guest author raises the question of whether a researcher submitting an article that was significantly drafted by an LLM without clear disclosure is effectively engaging in a contemporary form of ghost authorship.