We Could Use a Model Licensing Framework for Scholarly Content Use in AI Tools
Model licenses simplified library licenses in the 2000s. The same approach can streamline licensing scholarly content for AI training today.
The first AI training case has been decided in the US in favor of the copyright holder.
“Rights reservation language, whether in plain English, included in terms, or coded into, e.g., metadata, is ‘machine readable.’ It is a choice by an AI developer to not read ‘human readable’ rights reservation language.”
As a result of EU law and other factors, rights holders are reserving their AI rights, making this material available for licensed AI training.
Robert Harington attempts to reveal the inherent conflicts among our drive to be as open as possible, authors’ need to understand their rights, and a library’s mandate to provide its patrons with the enhanced discovery that comes with AI’s large language models (LLMs).
While Aretha Franklin’s “Chain of Fools” referred to betrayal of trust in love, when it comes to AI use of our work, writers feel betrayed by those who should be protecting our intellectual and creative property.
Several weeks ago, the Internet Archive lost its appeal of the lawsuit brought by a group of publishers opposed to its controlled digital lending programs. Roger Schonfeld examines what can be learned from this fair use defeat.
Publishers should support scholarly authors by requiring that license deals with AI developers include attribution in AI outputs.
In copyright law, the existence of licensing options affects a rights owner’s exclusive rights.
Robert Harington talks to Dr. Amy Brand of MIT Press in this series of perspectives from leaders across the non-profit and for-profit sectors of our publishing industry.
Legislation often lags technological advances. The EU’s Digital Single Market Copyright Directive leaves many open questions regarding AI text- and data-mining.
Before we launch into 2024, a look back at 2023 in The Scholarly Kitchen.
We asked the Chefs for their thoughts on the Biden Administration’s Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.”
A selection of questions and answers from Copyright Clearance Center’s response to the United States Copyright Office “Artificial Intelligence and Copyright” request for comment.
A report on the Chefs’ panel on AI, open content, and research integrity at the Frankfurt Book Fair.