Guest Post — Reflecting on a Decade with the Open Discovery Initiative: Insights from IEEE
Julie Zhu reflects on the IEEE’s journey with the Open Discovery Initiative (ODI) and the benefits of ODI conformance statements.
Robert Harington provides a template for scholarly societies wondering how to grapple with the overwhelming and omnipresent prospect of an AI future.
Inequities are rife in the research process, starting with the pre-award process. Based on feedback and input from researchers, research managers, and others, a new report looks at the challenges and makes recommendations for how funders and institutions can address them.
Was a recent Scholarly Kitchen piece analyzing the capabilities of ChatGPT a fair test? What happens if you run a similar test with an improved prompt on LLMs that are internet connected and up to date?
How does the shift to interdisciplinary research reshape the very foundation of how knowledge is generated and applied across various fields, and what do the different stakeholders in academia need to do to balance the depth of specialized knowledge with the breadth of interdisciplinary understanding?
Studying the way we’ve studied the past is mutual work. Archivists and librarians, and scholars using their collections, have each been producing critical archives scholarship that too often remains within disciplinary and professional siloes.
What are the burdens researchers face? And what can be done to lighten the load and make the academic environment more diverse, equitable, inclusive, safe, and welcoming?
Policies that formally give peer reviewers the option to invite a colleague to collaborate with them improve integrity and transparency, and offer a chance to give fair credit where it is due.
An update on how generative AI has progressed and how it has been applied to research publishing processes since ChatGPT was released, looking at business, application, technology, and ethical aspects of generative AI.
The current uproar over artificial intelligence does not show us what the future of AI will look like, but rather how a human population falls into predictable patterns as it contemplates any new development: we are observing not AI but ourselves observing AI.
Haseeb Irfanullah discusses how Communities of Practice can improve scholarly communications by capitalizing on our collective experiences.
The Supreme Court has ruled in the Andy Warhol–Prince fair use case. What does this mean for scholarly communications, and the reuse of materials for AI training?
Raymond Pun, Sai Deng, and Guoying (Grace) Liu on the challenge of advocating for diversity, equity, and inclusion within scholarly communications when your own institution isn’t “there” yet.
Data quality and record keeping are going to grow in importance as a result of AI applications.
The Data Hazards project looks at the problems in applying traditional ethical values to research that uses machine learning and artificial intelligence.