Guest Post — The Case For Not Citing Chatbots As Information Sources (Part II)
Citing chatbots as information sources offers little in terms of promoting smart use of generative AI and could also be damaging.
If you use a chatbot in writing a text, and are discouraged from listing it as a coauthor, should you attribute the relevant passages to the tool via citation instead? Is it appropriate to cite chatbots as information sources?
ChatGPT has popularized generative AI, but interpretive AI has quietly remained in the shadows. Interpretive AI offers profound insights into content and audience engagement, making it a critical tool for publishers aiming to harness the full potential of AI.
Balancing the anxiety and the excitement over the use of Large Language Models (LLMs) in scholarly publishing.
The short story “The Library of Babel” by Jorge Luis Borges provides an opportunity to consider the veracity of AI-generated information.
It’s been “the year of generative AI”, so Charlie Rapple asked ChatGPT to write some cracker-standard Christmas jokes with a scholarly communications theme.
Themes and ideas from the Fortune Brainstorm AI. “People won’t lose their jobs to AI; they’ll lose their jobs to people that are using AI.”
Introducing the AI in Scholarly Publishing Community of Interest (CoIN), the SSP’s latest offering to all its members to explore and engage in all matters AI as they relate to scholarly publishing.
Academia has developed an amazing tree of knowledge which is arguably the most important data for Large Language Models to be trained on. Where does the scholarly communication community fit in?
Separately, both open research and AI are considered disrupters, causes of disorder in the normal continuance of scholarly publishing. But approaching them in a synchronized way can offer more productivity gains and efficiencies than taking them on individually.
Generative AI wants to make information cheap, but will people want to read it? Are we ready for more productive writers?
Robert Harington provides a template for scholarly societies wondering how to grapple with the overwhelming and omnipresent prospect of an AI future.
In 2023, AI has been back in the news in a big way. Large Language Models and ChatGPT threatened ours and many other industries with huge disruption. As with so many threatened techno-shocks, a large degree of this one was hype, but what will happen after the hype fades? What, if anything, will be the lasting legacy of ChatGPT?
The challenges offered by artificial intelligence require a different approach than that seen for plagiarism detection.
Was a recent Scholarly Kitchen piece analyzing the capabilities of ChatGPT a fair test? What happens if you run a similar test with an improved prompt on LLMs that are internet connected and up to date?