Guest Post – ODI Survey on AI and Web-Scale Discovery
NISO’s Open Discovery Initiative (ODI) survey reflects the positive and negative expectations of generative AI in web-scale discovery tools.
The MIT Press surveyed book authors on their attitudes toward LLM training practices. In Part 2 of this two-part post, we discuss recommendations for stakeholders to avoid unintended harms and preserve core scientific and academic values.
The MIT Press surveyed book authors on their attitudes toward LLM training practices. In Part 1 of this two-part post, we discuss the results: authors are not opposed to generative AI per se, but they are strongly opposed to unregulated, extractive practices and worry about the long-term impacts of unbridled generative AI development on the scholarly and scientific enterprise.
In Asia, open access adoption is accelerating, yet the legal and structural underpinnings of this openness remain fragile, with significant licensing and copyright confusion.
An AAAS survey reveals authors’ concerns and confusion regarding open licensing of their work.
We asked the Chefs for their thoughts on two important court decisions on the legality of using copyrighted materials for AI training.
Changes in Library of Congress leadership could have profound impacts on copyright and intellectual freedom.
We are expecting the US Government’s AI Action Plan to be issued over the summer. In the meantime, we may glean some of the administration’s views by looking at recently issued information from the Office of Management and Budget (OMB).
Model licenses simplified library licenses in the 2000s. The same approach can streamline licensing scholarly content for AI training today.
The first AI training case has been decided in the US in favor of the copyright holder.
“Rights reservation language, whether in plain English, included in terms, or coded into, e.g., metadata, is ‘machine readable.’ It is a choice by an AI developer to not read ‘human readable’ rights reservation language.”
As a result of EU law and other factors, rights holders are reserving their AI rights, making this material available for AI training only through licensing.
Robert Harington attempts to reveal the inherent conflicts among our drive to be as open as possible, authors’ need to understand their rights, and a library’s mandate to provide its patrons with the enhanced discovery that comes with AI’s large language models (LLMs).
While Aretha Franklin’s “Chain of Fools” referred to betrayal of trust in love, when it comes to AI use of our work, writers feel betrayed by those who should be protecting our intellectual and creative property.
Several weeks ago, the Internet Archive lost its appeal in the lawsuit brought by a group of publishers opposed to its controlled digital lending program. Roger Schonfeld examines what can be learned from this fair use defeat.