Guest Post — Hanging in the Balance: Generative AI Versus Scholarly Publishing
Balancing the anxiety and the excitement over the use of Large Language Models (LLMs) in scholarly publishing.
The short story “The Library of Babel” by Jorge Luis Borges provides an opportunity to consider the veracity of AI-generated information.
Themes and ideas from the Fortune Brainstorm AI conference. "People won't lose their jobs to AI; they'll lose their jobs to people that are using AI."
In today's Kitchen Essentials interview, Alice Meadows asks Chris Shillum, Executive Director of ORCID, to share his thoughts about his career in research infrastructure.
Introducing the AI in Scholarly Publishing Community of Interest (CoIN), the SSP’s latest offering to all its members to explore and engage in all matters AI as they relate to scholarly publishing.
Academia has developed an amazing tree of knowledge which is arguably the most important data for Large Language Models to be trained on. Where does the scholarly communication community fit in?
Separately, both open research and AI are considered disruptors, sources of disorder in the normal course of scholarly publishing. But approaching them in a synchronized way can offer more productivity gains and efficiencies than taking them on individually.
Today, Alice Meadows and Roger Schonfeld introduce a new interview series – Kitchen Essentials – featuring leaders of some of the key scholarly infrastructure organizations globally.
Some beautiful winners in this year’s Nikon Small World in Motion video microscopy competition.
With yet another stumble from Twitter/X, Angela Cochran looks at the numbers and asks whether all the efforts journals have put into building and maintaining journal Twitter accounts have been worth it.
Robert Harington provides a template for scholarly societies wondering how to grapple with the overwhelming and omnipresent prospect of an AI future.
In 2023, AI has been back in the news in a big way. Large Language Models and ChatGPT threatened our industry, and many others, with huge disruption. As with so many threatened techno-shocks, a large degree of this one was hype, but what will happen after the hype fades? What, if anything, will be the lasting legacy of ChatGPT?
Human-dependent peer review is inequitable, suffers from injustice, and is potentially unsustainable. Here’s why we should replace it (eventually) with AI-based peer review.
In today's Peer Review Week guest post, Joe Pold of PLOS interviews the senior editorial team of PLOS Computational Biology about their experience of mandating code sharing for the journal, and its impact on peer review.
How machines learn, as demonstrated by a pile of matchboxes playing tic-tac-toe.