Guest Post: The Human Heart of Science — Navigating AI Anxiety in the Academic World
Today’s guest blogger calls for “rehumanizing” our view on AI innovations and their impacts on our mental health and our communities.
Current AI disclosure guidelines are failing and driving AI use underground rather than making it transparent. In this follow-up post, I turn to the more challenging question: what publishers should do about it.
Only a negligible percentage of authors seem to actually be disclosing their AI use. Here’s why I think that’s the case.
Today’s guest bloggers reflect on the experience of “imposter syndrome” and how we might adopt a new approach to moments of uncertainty and change.
Today’s guest blogger reflects on their panel discussion about the policies and realities of AI in scholarly communications at COPE’s Publication Integrity Week event last month.
Today’s guest bloggers advocate for marketing strategy using localization, which brings cultural fluency, awareness, and authenticity to our communication with partners around the world.
The first installment of SSP’s new polling initiative, Pulse Check, explores AI in scholarly publishing and sets out to understand how our communities are navigating this monumental shift.
To close out 2025, we asked the Chefs: What would you ask for from Academic Publishing Santa?
At the STM Innovation and Integrity Days in London last week, it was clear that research integrity has become an increasingly pressing issue. Many publishers are reporting significant increases in submissions of questionable legitimacy. Perhaps now is the time for a new alliance between publishers, funders, institutions, and researchers to protect the integrity of the scholarly record, before it’s too late.
Academic publishing is reaching a breaking point. Unless we redesign it, we risk stalling the very progress we seek, with consequences for research, education, and public trust in academia.
Rather than simply bolting AI onto existing publication workflows, there is a real opportunity to rethink and redesign them for human–AI collaboration. Some thoughts on what that looks like in practice.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
Nearly three years after ChatGPT’s debut, generative AI continues to reshape scholarly publishing. The sector has moved from experimentation toward integration, with advances in ethical writing tools, AI-driven discovery, summarization, and automated peer review. While workflows are becoming more efficient, the long-term impact on research creation and evaluation remains uncertain.
If science is to be both honest and healthy, we must accept that statistically non-significant results are part of reality. The SAMPL guidelines, if adopted widely by scholarly publishers and journal editors, hold a solution for authors who worry their results are not “significant.”
Tony Alves reflects on the 2025 Peer Review Congress and the rapid evolution of discussions about AI and peer review since 2022.