Guest Post — AI as Reader, Author, and Reviewer: What Stays Human?
Today’s guest blogger shares highlights from a recent panel at the New Directions Seminar that concluded that AI is simultaneously the largest challenge and the largest opportunity.
Publishers have led themselves into a mess by focusing on rising submissions as a positive indicator of journal performance. The time has come to close the floodgates and require that authors demonstrate their commitment to quality science before we let them in the door.
Creative Commons (CC) licenses expand, not restrict, the permissible uses of copyrighted works.
For decades, EAL researchers have faced systemic disadvantages in publishing. AI writing tools promise relief, yet they also bring new risks into science.
Nearly three years after ChatGPT’s debut, generative AI continues to reshape scholarly publishing. The sector has moved from experimentation toward integration, with advances in ethical writing tools, AI-driven discovery, summarization, and automated peer review. While workflows are becoming more efficient, the long-term impact on research creation and evaluation remains uncertain.
Today, we speak with Prof. Yana Suchikova about GAIDeT, the Generative AI Delegation Taxonomy, which enables researchers to disclose the use of generative AI in an honest and transparent way.
AI has opened a new chapter in the saga of science and peer review. Today, guest author Prof. Nihar B. Shah explains how, if guided with integrity, AI can open galaxies of possibilities.
To kick off Peer Review Week, we asked the Chefs: What’s a bold experiment with AI in peer review that you’d like to see tested?
As AI becomes a major consumer of research, scholarly publishing must evolve: from PDFs for people to structured, high-quality data for machines.
The MIT Press surveyed book authors on attitudes towards LLM training practices. In Part 2 of this two-part post, we discuss recommendations for stakeholders to avoid unintended harms and preserve core scientific and academic values.
The MIT Press surveyed book authors on attitudes towards LLM training practices. In Part 1 of this two-part post, we discuss the results: authors are not opposed to generative AI per se, but they are strongly opposed to unregulated, extractive practices and worry about the long-term impacts of unbridled generative AI development on the scholarly and scientific enterprise.
If LLMs are the future of information discovery, valuable scholarly content risks being left behind — unless we build a bridge with better licensing.
Guest blogger Hema Thakur shares results of her experiment using AI to improve the accessibility of peer review feedback — her findings may concern you!
Robert Harington talks to Matt Kissner, CEO of Wiley, in this series of perspectives from publishing leaders across the non-profit and for-profit sectors of our industry.
We asked the Chefs for their thoughts on two important court decisions on the legality of using copyrighted materials for AI training.