Let’s Be Cautious As We Cede Reading to Machines
AI might help with the deluge of content, but there are problems when we rely on machines to think for us.
As we strive for a more equitable and inclusive future, how can we foster the well-being and potential of every individual, regardless of their ethnic or racial background?
Balancing the anxiety and the excitement over the use of Large Language Models (LLMs) in scholarly publishing.
Themes and ideas from the Fortune Brainstorm AI. “People won’t lose their jobs to AI; they’ll lose their jobs to people that are using AI.”
Escalating attacks on the humanities often cite the problem of employment for humanities majors; a new report shows otherwise.
As we contemplate a pause during the holiday season, we must ask ourselves: Isn’t the researcher’s overall well-being as crucial as the research itself?
We asked the Chefs for their thoughts on the Biden Administration’s Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence.”
The beginning of the holiday season means it’s time for our annual list of our favorite books read (and other cultural creations experienced) during the year. Part 2 today.
We asked the Chefs to weigh in with their thoughts on the new “Towards Responsible Publishing” manifesto from cOAlition S.
Generative AI wants to make information cheap, but will people want to read it? Are we ready for more productive writers?
We all know the journals market has rapidly consolidated over recent years. But where’s the data? I set out to find some numbers to back up the conventional wisdom.
Functional silos lead to customer data silos. Can you get a full view of customer engagement without re-architecting your whole organization?
With yet another stumble from Twitter/X, Angela Cochran looks at the numbers and asks whether all the efforts journals have put into building and maintaining journal Twitter accounts have been worth it.
A mixed-bag post from us — can you separate the significance of research results from their validity? What will the collapse of the humanities mean for scholarly publishing writ large? Plus, a new draft set of recommended practices for communicating retractions, removals, and expressions of concern.
But are there enough reviewers to meet demand, and is the peer review process efficient enough to handle the sheer volume of papers being published? How can a combination of human expertise and AI make peer review more efficient?