Ensuring attribution is critical when licensing content to AI developers
Publishers should support scholarly authors by requiring that license deals with AI developers include attribution in their outputs.
The FORCE11 conference at UCLA lays the groundwork for continued efforts to transform research communications and e-scholarship.
A look at how AI tools can help transform information access into information comprehension.
To learn how Scopus AI works under the hood, we interview Maxim Khan, Elsevier's Senior VP of Analytics Products and Data Platform.
In this post by Todd Carpenter, Phill Jones, and Alice Meadows, you can read all about PIDfest, which brought together nearly 400 persistent identifier users and providers from around the world (in person in Prague, and virtually).
In today’s Kitchen Essentials, Roger Schonfeld speaks with Wendy Queen, Director, Project MUSE, a leading provider of digital humanities and social science content for the scholarly community around the world.
Research publications contain the answers to some of the world's most pressing challenges. But to realize that potential, more people need to find, understand, and act on them.
What does the research tell us about how dogs see the world?
The gaps in AI capability will narrow over time, but publishers and end users need education about those gaps to make sound investment decisions and to use generative AI tools with confidence.
We asked Campus Disability Services leaders, “What would you most like publishers to know?”
In this post – the first of two discussing artificial intelligence and information discovery – we explore the evolution of information discovery, its role in the research journey, and how it can be applied to help researchers and publishers alike.
Part one of a look back at the Publisherspeak meeting — today’s themes: author experience (AX) and AI.
The latest STM Trends is out, showing a future where humans and machines are integrated and engaged, supporting research and output sharing.
Journal articles containing ChatGPT-authored text are being found. How common is this in the literature? And how, or better yet, when, is this problematic text slipping through to publication?
The internet was not designed to provide a permanent digital record of scientific research. This post looks at current approaches to addressing the shortcomings of existing internet technology, identifies remaining bottlenecks, and suggests how they could be resolved. Upgrades to the backbone of the scientific record could go a long way toward addressing the replication crisis and the growing challenge publishers face in spotting fake research.