Guest Post — Funding Research Services: How Libraries are Exploring Cost Recovery Models
Today’s guest bloggers share results of an exploratory survey of funding research services, offering a snapshot of a library community in transition.
Rather than just bolting AI onto existing publication workflows, there is a real opportunity to rethink and redesign them for human–AI collaboration. Some thoughts on what that looks like in practice.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
Nearly three years after ChatGPT’s debut, generative AI continues to reshape scholarly publishing. The sector has moved from experimentation toward integration, with advances in ethical writing tools, AI-driven discovery, summarization, and automated peer review. While workflows are becoming more efficient, the long-term impact on research creation and evaluation remains uncertain.
If science is to be both honest and healthy, we must accept that statistically non-significant results are part of reality. The SAMPL guidelines, if adopted widely by scholarly publishers and journal editors, hold a solution for authors who worry their results are not “significant.”
We’re finally seeing a move to truly digital-first publishing systems and in today’s post Alice Meadows interviews Liz Ferguson of Wiley about this transition, including their own Research Exchange platform.
Today, we talk to thought leaders Helen King and Chris Leonard, who offer a nuanced look at how peer review might adapt, fracture, or reinvent itself in the AI era.
The future of peer review isn’t about choosing between humans and AI, or between speed and quality, but about combining the strengths of both to deliver speed alongside quality, ethics, and trust in the scholarly record.
Summing up the Committee on Publication Ethics (COPE) Forum discussion on Emerging AI Dilemmas in Scholarly Publishing, which explored the many challenges AI presents for the scholarly community.
A scholarly disinformation taxonomy could help prevent scholarly communications from being gamed by fraudulent actors.
Robert Harington talks to Carsten Buhr, CEO of De Gruyter Brill, in this series of perspectives from some of publishing’s leaders across the non-profit and for-profit sectors of our industry.
Robert Harington digs into the world of preprints. He uses the field of mathematics to explore how an inclusive view of preprints and published articles leads to a research ecosystem that is greater than the sum of the parts.
Libraries and publishers represent the interests of thousands of authors, readers, scientists, researchers, students, and lifelong learners. Today, we stand united to face the mounting risks to public trust and the social benefit that research delivers.
Christos Petrou examines the rapid growth in publication volume coming from China, and how that is impacting the publishing industry.
A summary of the European Association of Science Editors (EASE) debate session, where Haseeb Irfanullah argued in favor of a motion declaring that journal editors do not need to worry about preventing the spread of misinformation, while Are Brean argued against it.