Note: This article was adapted from Haseeb Irfanullah’s talk at the panel discussion on ‘Exploring the Pros and Cons: The Role of Artificial Intelligence in the Scholarly Publishing Industry’ at the 9th annual meeting (virtual) of Asian Council of Science Editors (ACSE).
Despite concerns over the potential for Artificial Intelligence (AI) to undermine publishing integrity, its role in the publishing workflow, as well as its impact on the industry, has been widely explored in recent months. Analyses looking at every stage of AI's use (writing research articles, submitting articles to journals, article review, production, publishing, and finally dissemination and discoverability) have dominated much of the publishing conversation (examples include posts from The Scholarly Kitchen about leveraging AI and Big Data and about Submission and Review, and Contech's look at rapid technological change in the information industry).
But what uses for AI might we expect outside of the publication workflow? Some answers to this question can be found through the lenses of sustainability, justice, and resilience.
Over the last three years, sustainability has increasingly been discussed in the publishing arena. Momentum was gained in late 2020, when the UN and the International Publishers Association (IPA) proposed the SDG Publishers Compact. While being a signatory to the Compact is straightforward, following its 10 action points in the real world can be challenging. AI can help individual journals, publishers, and the larger publishing sector to measure progress towards the Compact, to assess research impact on the ground, and to understand the effectiveness of our approaches to meet the Sustainable Development Goals (SDGs) as a whole, through:
1) Linking individual actions to the wider sector
Publishers can use AI to introduce an ‘SDG Meter’ on a journal’s homepage alongside other metrics announcing the journal’s impacts. This could be done in several ways: i) showing how each issue/volume of a journal contributes to each of the 17 SDGs by creating new knowledge; ii) showing how citations of these SDG-linked articles subsequently contribute to the relevant SDG(s) over the years; and iii) linking these numbers to the broader assessments and measurements of progress towards the SDGs by reputed authorities, such as the UN, global think tanks, or even the host of the SDG Publishers Compact. In this way, the scholarly publishing sector can be a part of the wider sustainable development journey by communicating the knowledge it creates.
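To make the idea concrete, here is a minimal sketch of how an ‘SDG Meter’ might tag articles and aggregate counts per issue. It assumes a simple keyword lookup; a production system would rely on a trained classifier, and the keyword map and abstracts below are purely illustrative.

```python
from collections import Counter

# Illustrative keyword map (assumption): a real system would use a
# trained SDG classifier, not keyword matching.
SDG_KEYWORDS = {
    3: ["health", "disease", "well-being"],
    4: ["education", "learning"],
    13: ["climate", "emissions"],
}

def tag_article(abstract):
    """Return the set of SDG numbers an abstract appears to address."""
    text = abstract.lower()
    return {sdg for sdg, words in SDG_KEYWORDS.items()
            if any(w in text for w in words)}

def sdg_meter(abstracts):
    """Aggregate SDG tags across one issue to feed a homepage 'SDG Meter'."""
    meter = Counter()
    for abstract in abstracts:
        meter.update(tag_article(abstract))
    return dict(meter)

issue = [
    "Climate adaptation and emissions pathways in coastal cities",
    "Community health interventions to reduce disease burden",
]
print(sdg_meter(issue))  # {13: 1, 3: 1}
```

The same per-issue counts could then be rolled up per volume or per year, which is what step iii) above would link to external SDG progress assessments.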
2) Measuring research impact
In the scholarly world, we widely use citation metrics to measure journals’ as well as articles’ impact. But tracking research impact on the ground, and subsequently measuring it, is something else. Individual case studies could be written to showcase how research in a particular sector or discipline led to positive transformation. But this fails to paint a broader picture, so AI could be used to develop a tool that collates citations and in-text, non-cited reference points from a wide range of sources, such as policy and planning documents, briefs, legislative instruments, development project reports, and media reports, to synthesize the impact of research on improving policies and practices, and thus human well-being. BMJ Impact Analytics has started using this approach in the health sector. There is a fantastic opportunity for other disciplines to build on BMJ’s experience.
3) Evaluating SDG-mainstreaming tools
Feedback forms or scores by users of tools meant to help mainstream the SDGs can be registered and displayed on the repositories of such tools (e.g., IPA’s SDG Dashboard) and/or on the specific tool’s webpage (e.g., SDG Publishers Compact Fellows’ Top Action Tips and STM’s SDG Sustainability Roadmap). This will help to rank each tool and help potential users to choose from different instruments based on their effectiveness.
The concepts and approaches to improve diversity, equity, inclusion, and accessibility (DEIA) in scholarly publishing have gained momentum in recent years to reduce gaps among races, genders, abilities, career levels, and geographies. But there are certain elements of justice which are not often talked about. For example, individuals are voluntarily involved in the scholarly arena as reviewers for journals; mentors in mentorship programs; members of advisory/editorial boards, working groups, and award judging panels; and participants in webinars, but without being recognized across the system for their multifaceted contributions. AI can help us not only to gain recognition for our voluntary contributions, but also to reduce our undue burden as peer reviewers, remove elements of injustice associated with the review system, improve access to digital content for persons with disabilities, and reduce different forms of inequity in the publishing landscape.
1) Building a ‘Voluntary Contribution Transaction System’
Voluntary contributions of individual scholars, as noted above, are often not valued comprehensively or tangibly. An AI-assisted ‘point transaction system’ could fill this gap by linking contributions to a persistent digital identifier (e.g., ORCID iD). Each voluntary contribution to the scholarly system (reviewing papers, moderating discussions, or mentoring early-career researchers) would add points to the individual’s account. These ‘voluntary contribution points’ could be transferred from publishers and event organizers to an account associated with the person in question. Individuals could then use these points to pay article processing charges (APCs), journal subscription fees, society membership fees, or registration fees to attend events. I plan to elaborate on the proposed point transaction system in The Scholarly Kitchen in the near future.
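A minimal sketch of what such a point transaction system might look like, assuming illustrative point values and redemption rules that are not part of any existing standard (the iD shown is ORCID's published example identifier):

```python
class ContributionLedger:
    """Sketch of a voluntary-contribution point system keyed to ORCID iDs.
    Point values and redemption rules are illustrative assumptions."""

    POINTS = {"review": 10, "mentoring": 15, "panel": 5}

    def __init__(self):
        self.balances = {}

    def credit(self, orcid, activity):
        # A publisher or event organizer transfers points for an activity.
        self.balances[orcid] = self.balances.get(orcid, 0) + self.POINTS[activity]

    def redeem(self, orcid, cost):
        # Spend points against an APC, subscription, membership,
        # or event registration fee.
        if self.balances.get(orcid, 0) < cost:
            raise ValueError("insufficient points")
        self.balances[orcid] -= cost
        return self.balances[orcid]

ledger = ContributionLedger()
ledger.credit("0000-0002-1825-0097", "review")
ledger.credit("0000-0002-1825-0097", "mentoring")
print(ledger.redeem("0000-0002-1825-0097", 20))  # 5 points remain
```

The interesting design questions, such as who sets the exchange rate between a review and an APC, and how transfers between publishers are settled, are exactly what the fuller proposal would need to address.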
2) Using AI-assisted review
Although review is a crucial step in the journal publishing workflow, there is a justice element to it. Relying on reviewers’ altruism while treating them as (often nameless) volunteer gatekeepers of a US$10-billion business, in which some publishers’ profit margins exceed those of Amazon, Apple, or Google, indeed raises a question of justice. One way of converting this unjust situation into a just one is paying the reviewers, which remains a subject of controversy. AI can make desk rejection before peer review faster by quickly assessing whether the manuscript’s scope, overall structure, and basic aspects of research integrity meet the requirements of the journal. AI-assisted review can also reduce pressure on human reviewers by checking manuscripts and identifying weaknesses in reviews, and by helping the editors and journals who struggle to recruit enough competent reviewers. Preprint servers can also benefit from AI-assisted review, despite its possible weaknesses, such as algorithmic bias. If the first review of all preprints is done by AI within a few days of submission, human reviewers can then join the open review process by building on the AI-assisted review reports. Recently, a similar blending of trained human intervention and AI tools has also been advocated as a realistic step forward to ensure research integrity.
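The desk-rejection step described above can be sketched as a simple triage function. Real AI screening would use trained language models; the rule-based checks, section list, and scope keywords below are stand-in assumptions to show the shape of such a tool.

```python
# Illustrative requirements (assumptions): each journal would configure
# its own required sections and scope terms.
REQUIRED_SECTIONS = {"abstract", "methods", "results", "references"}
JOURNAL_SCOPE = {"ecology", "conservation", "biodiversity"}

def triage(manuscript):
    """Return a list of problems that could justify desk rejection
    before the manuscript reaches human reviewers."""
    problems = []
    missing = REQUIRED_SECTIONS - set(manuscript["sections"])
    if missing:
        problems.append(f"missing sections: {sorted(missing)}")
    keywords = {k.lower() for k in manuscript["keywords"]}
    if not keywords & JOURNAL_SCOPE:
        problems.append("outside journal scope")
    if not manuscript.get("data_availability"):
        problems.append("no data availability statement")
    return problems

paper = {
    "sections": ["abstract", "methods", "results"],
    "keywords": ["Ecology", "wetlands"],
    "data_availability": "",
}
print(triage(paper))
```

An empty result would let the manuscript pass to human reviewers; a non-empty one gives the editor a fast, explainable basis for desk rejection.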
3) Improving access to digital content for persons with disabilities
One in every 10 people in the world has some kind of disability: visual, hearing, mobility, or learning/cognitive. These disabilities create disproportionate barriers to accessing scholarly digital content. In June of this year, Hong Zhou and Sylvia Izzo Hunter wrote about how AI can offer solutions to overcome these challenges and improve accessibility.
4) Reducing diverse forms of inequity
The Global North and the Global South not only show economic and technological divides, but also gaps in research accessibility and differences in cultures, norms, and perceptions of scholarly publishing. While international collaborative research on AI remains limited, AI itself can act as a binding element of global collaboration. It can identify collaboration opportunities between Northern and Southern scholarly publishers and their societies for exchanging ideas and sharing resources, thus advancing the disciplines to which they both belong. To survive with limited resources, small scholarly associations and journals (even in the Global North) can leverage each other’s capacity, resources, and networks; AI-assisted networking tools and platforms and AI-guided co-creation processes can help achieve that. Based on DEIA guidelines, AI can also help them develop a decision-making and planning application to identify and advise on areas needing DEIA-related interventions. This could further be used to develop scorecards to measure progress in implementing DEIA plans.
While the publishing industry survived the COVID-19 pandemic and recovered well from it, many shocks and stresses (e.g., unethical publishing practices; policy and legislative changes by governments and funders) continuously affect the industry. AI can help build our resilience: i) by absorbing known shocks and stresses through protective measures; ii) by adapting to them over the longer term through incremental adjustments and changes in action; and iii) eventually by transforming our governance and regulations, our cultures and norms, and our view of our role beyond publishing operations, toward an equitable publishing landscape.
1) Fighting unethical publishing practices
AI can address this challenge in a number of ways. First, before submitting a manuscript to a journal, authors can use an AI-supported tool to check whether a journal is authentic, ethically sound, and exhibits publishing integrity. Building on available checklists or guidelines (e.g., Think. Check. Submit.), such an application could look for certain vital cues by searching journal websites and pertinent sources (e.g., editorial board members’ webpages and social media activities). Currently, these types of searches are done manually by authors, often from low- and middle-income countries, because journal blacklists have proven controversial, given the grey areas in calling a journal “predatory”, and because some of the black- and whitelists sit behind paywalls. Building on the experience of systems such as academic journal predatory checking (AJPC) can also be useful, especially when such systems are open. Second, there are concerns about AI-generated articles, both legitimate research written up by AI tools and completely fabricated manuscripts. In addition to using AI tools as the first line of quality checking, if journals ask for accompanying data sets along with manuscripts, AI tools could check the reproducibility of the research and thus reduce the chance of publishing articles with fake data. Nevertheless, for now, we are probably far from detecting AI-generated text in a foolproof manner, and this remains a significant concern.
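A journal-authenticity checker of the kind described above could score a journal against a cue checklist. The cues and weights below are illustrative assumptions loosely inspired by the Think. Check. Submit. guidance, not an actual scoring standard; an AI-supported tool would gather these cues automatically from journal websites rather than take them as input.

```python
def journal_trust_score(journal):
    """Score a journal against a weighted checklist of authenticity cues.
    Cues and weights are illustrative assumptions."""
    cues = {
        "issn_registered": 2,        # ISSN resolves in the ISSN Portal
        "named_editorial_board": 2,  # board members are real, verifiable people
        "indexed_in_doaj": 2,        # listed in a vetted open index
        "clear_apc_policy": 1,       # fees stated up front, no surprises
        "verifiable_contact": 1,     # physical address and working contact
    }
    score = sum(weight for cue, weight in cues.items() if journal.get(cue))
    return score, sum(cues.values())

candidate = {
    "issn_registered": True,
    "named_editorial_board": True,
    "indexed_in_doaj": True,
    "clear_apc_policy": False,
    "verifiable_contact": True,
}
score, maximum = journal_trust_score(candidate)
print(f"{score}/{maximum}")  # 7/8
```

Presenting a score with its evidence, rather than a binary “predatory or not” label, sidesteps some of the grey areas that made blacklists controversial.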
Third, earlier this year, controversies over paper mills hit a few mega-journals hard. This is indeed a failure on the publishers’ side to supervise their Guest Editor / Special Issues model judiciously. It is also a failure on the indexing agencies’ part in ignoring the downside of a purely quantitative way of measuring journal impact in a rapidly changing publishing world that continuously evolves to game these systems. This has not only cost the publishers hundreds of millions of dollars in business, but also caused huge non-economic damage, such as losing the trust of authors and readers. To address the paper-mill challenge, Wiley’s Jay Flynn proposed to “Increase our investment in both expertise and technology to support the early identification of unethical publishing behavior.” Building on such recent efforts, publishers can use AI to develop a robust early-warning system to identify whether a paper mill is active within a journal. Such an alert system could also be adopted by indexing agencies as part of their commitment to the wider scholarly community to ensure publishing integrity.
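One ingredient of such an early-warning system could be a simple anomaly check on submission volumes: paper-mill activity in a special issue often shows up as a sudden surge of submissions. The heuristic below flags weeks whose counts sit far above the journal's historical baseline; real paper-mill detection would combine many more signals (shared manuscript templates, reviewer-author overlaps, tortured phrases), and the data here is invented for illustration.

```python
from statistics import mean, pstdev

def spike_alert(weekly_counts, threshold=2.0):
    """Return indices of weeks whose submission counts exceed
    mean + threshold * standard deviation of the series."""
    baseline = mean(weekly_counts)
    spread = pstdev(weekly_counts) or 1.0  # avoid division by zero
    return [i for i, count in enumerate(weekly_counts)
            if (count - baseline) / spread > threshold]

# Invented example: a quiet journal receives a sudden surge of
# submissions in the final week of a special-issue call.
history = [4, 5, 6, 5, 4, 6, 5, 48]
print(spike_alert(history))  # [7]
```

A flagged week would not prove misconduct on its own; it would simply prompt a human integrity check before the affected special issue proceeds.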
2) Responding to ever-transforming governments’ and funders’ policies
AI can help public policy-making and can also help organizations to align their actions against new and updated public policies. By using AI, publishers and societies can develop applications to help identify the pathways to respond to relevant policy changes by governments, funders, or multilateral agencies. Such decision-making tools may suggest what changes are needed within a publisher’s policy, at what level, by whom, to what extent, over what time period, causing what financial implications, and with what implications for non-compliance, for example. Such applications could also generate reports that could feed into evaluation of the public policy in question.
If we start thinking about the use of AI beyond the publishing workflow, it can also help us synergize many publishing activities around sustainability, justice, resilience, or any other lens we choose. DOIs and ORCID iDs are fantastic examples, simple but deep, of how innovations can build solidarity among different actors in the publishing industry by giving universal identity to creations and their creators, respectively. Can we expect AI to be used in such a way that it acts as an ‘aggregator’ of what we have individually achieved so far, as an ‘accelerator’ to bring us to the pace we want to reach, and as an ‘appraiser’ of our progress, thus becoming a vital element of our solidarity?