I recently attended the Gartner webinar on the top strategic technology trends for 2025, presented by Gartner analyst Gene Alvarez.
During the webinar, Alvarez highlighted key trends, including the rise of ‘Agentic AI’, new frontiers in computing such as hybrid computing and post-quantum cryptography, and advances in human-machine synergy. But while these exciting trends are set to change the landscape across many industries, what do they mean for scholarly publishing?
In this post, I focus on four of the trends talked about in the webinar that are most relevant to our industry and discuss the challenges and opportunities they might bring.
1. Agentic AI
Agentic AI systems are able to operate autonomously in a goal-directed manner (similar to how human agents act with intentionality). They use advanced capabilities such as memory, planning, sensing, and tooling to make decisions and take actions independently, often in complex and dynamic environments. At Wiley, we are already using agentic AI in Operations to drive customer service efficiencies and developing pilots to bring agents to other job functions.
What does the advent of agentic AI mean for scholarly publishing more broadly? These systems could be applied across the publishing journey to increase efficiency, improve integrity, and create a more personalized experience. For example, they may support:
- Content creation and enrichment: Extract key entities, concepts, and metadata from manuscripts by automatically selecting the best tools for different formats, and automatically generate abstracts, summaries, and podcasts tailored to different needs and contexts. This would improve both the capability and the accuracy of handling globally diverse content.
- Integrity checks: Ensure adherence to ethical guidelines and automatically decide which tools to use to detect unethical practices like plagiarism, image manipulation, or paper mill activity, and decide whether further investigation is needed with the support of other tools and human expertise.
- Knowledge discoverability and accessibility: Facilitate personalized discovery by enabling conversational discovery and personalized recommendations. Users can specify their goals, with the agent automatically breaking down tasks and deciding what and where to search, and which tools to use. In the future, websites may be enhanced or even replaced by agents, whereby users can interact with the agents directly without having to click a web link to find information themselves. For accessibility, real-time multilingual translations could be provided for global events to bridge language barriers.
- Workflow optimization: Monitor and optimize the publishing journey from submission to peer review to publication, predicting bottlenecks and autonomously suggesting corrective actions. This can reduce operational expense while freeing up staff to focus on strategic activities.
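As an illustration of the workflow-optimization idea above, here is a minimal Python sketch of an agent-style monitor that flags pipeline bottlenecks and maps each to a corrective action. All stage names, service-level targets, and actions are invented for the example, not drawn from any real publisher's system.

```python
from dataclasses import dataclass, field

@dataclass
class Manuscript:
    """Tracks how many days a manuscript has spent in each pipeline stage."""
    id: str
    stage_days: dict = field(default_factory=dict)

# Illustrative service-level targets per stage, in days (assumed values).
SLA = {"submission_checks": 3, "peer_review": 45, "production": 14}

def find_bottlenecks(ms: Manuscript) -> list[str]:
    """Return stages exceeding their target, worst offender first."""
    late = {s: d - SLA[s] for s, d in ms.stage_days.items()
            if d > SLA.get(s, float("inf"))}
    return sorted(late, key=late.get, reverse=True)

def suggest_action(stage: str) -> str:
    """A toy 'planning' step: map a bottleneck to a corrective action."""
    actions = {
        "peer_review": "invite additional reviewers",
        "submission_checks": "route to automated integrity screening",
        "production": "escalate to production vendor",
    }
    return actions.get(stage, "flag for human triage")

ms = Manuscript("MS-001", {"submission_checks": 2, "peer_review": 60, "production": 20})
for stage in find_bottlenecks(ms):
    print(stage, "->", suggest_action(stage))
```

A real agentic system would of course replace the hard-coded rules with learned models and tool calls, but the loop is the same: observe the pipeline, detect a deviation from the goal, and select an action.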
While Agentic AI offers exciting opportunities, successful implementation will depend on addressing ethical concerns (ensuring ethical behavior and alignment with human values), establishing reliability and trust (with human oversight and adoption where applicable), and ensuring seamless integration with existing workflows.
2. AI governance platforms
These are systems designed to ensure the responsible, ethical, lawful, and transparent use and development of AI. They help organizations manage risks, comply with regulations, and align AI practices with ethical standards by embedding governance mechanisms throughout the AI lifecycle.
AI governance currently lags far behind AI capabilities. And although many organizations have AI governance committees set up to manage AI regulations, AI governance platforms are still needed. Not only do they help monitor and mitigate biases in AI models, but they also systematically assign accountability and embed ethical and privacy standards into AI management. Increasing numbers of these platforms are appearing, such as Credo AI and Holistic AI.
There are several potential applications of AI governance platforms in scholarly publishing:
- Checking whether AI-generated content adheres to publisher guidelines and ethical principles.
- Detecting and monitoring bias for recommendations such as reviewer and journal suggestion.
- Monitoring integrity detection tools for fairness and accuracy.
- Monitoring the access and control of personal data in AI training.
- Categorizing the AI systems used in an organization and auditing to ensure they comply with regulations such as the EU AI Act.
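To make the last point concrete, here is a toy Python sketch of how an inventory of AI systems might be bucketed into EU AI Act-style risk tiers. The tier names reflect the Act's broad categories, but the keyword rules and system names are simplified assumptions for illustration only, not legal guidance.

```python
# Illustrative only: a toy classifier mapping AI use cases in a publishing
# organization to EU AI Act-style risk tiers. Rules are checked in order,
# first match wins; unknown systems are routed to human governance review.
RISK_RULES = [
    ("biometric", "unacceptable"),
    ("reviewer_selection", "high"),    # decisions materially affecting individuals
    ("integrity_screening", "high"),
    ("chatbot", "limited"),            # transparency obligations apply
    ("summarization", "minimal"),
]

def classify(use_case: str) -> str:
    """Assign a risk tier to a named AI system (toy keyword matching)."""
    for keyword, tier in RISK_RULES:
        if keyword in use_case:
            return tier
    return "unreviewed"  # flag for a human governance review

inventory = ["manuscript_summarization", "reviewer_selection_model", "reader_chatbot"]
audit = {system: classify(system) for system in inventory}
print(audit)
```

A governance platform would layer much more on top of this (evidence collection, audit trails, regional rule sets), but the core categorize-then-audit workflow is the same.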
While AI governance platforms can enable proactive risk management and improve compliance with AI policies (thus enhancing transparency and trust), there are still some limitations. For example, implementation is complex, involving technical expertise, policy and operational changes, and organizational alignment. Keeping up with dynamic, region-specific AI regulations and embedding them, where applicable, in one governance platform is also challenging.
3. Disinformation security
This generally refers to the strategies, tools, and processes designed to detect, prevent, and mitigate the spread of false or misleading information intentionally created to deceive, defraud, or manipulate an audience. In scholarly publishing, disinformation security is critical to protecting the integrity of the research record and maintaining trust among researchers, institutions, and the public.
Currently, the majority of solutions are reactive – they detect unusual patterns and verify the authenticity of authors and reviewers only after manipulated content has been submitted. These solutions suffer from data limitations: insufficient or biased training data can reduce the effectiveness of integrity detection. High false-positive rates can misidentify genuine work as fraudulent, increasing costs and reducing trust in detection.
Many organizations are aiming to tackle these challenges in proactive ways. One example is Project Origin, which the Coalition for Content Provenance and Authenticity (C2PA) standards body describes as:
“An alliance of leading organizations from the publishing and technology worlds, working together to create a process where the provenance and technical integrity of content can be confirmed. Establishing a chain of trust from the publisher to the consumer.”
Publishers are in a good position to participate and collaborate with others to tackle Artificial Intelligence Generated Content (AIGC) issues, such as disclosure of the source material powering the solutions.
Watermarking and provenance technologies such as Google’s SynthID are promising, but they take a different approach from traditional AIGC detectors like GPT detection and iThenticate. The source (or publishers’ information) could also be embedded in a watermark, which publishers would apply when creating AIGC content. Doing so would not only help detect AIGC later, but could also help protect and track publishers’ content and resolve copyright and licensing issues.
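To illustrate the idea of embedding a publisher's source information so it can be verified later, here is a conceptual Python sketch in the spirit of C2PA-style provenance signing. This is not SynthID's actual mechanism, and the key and field names are made up: the publisher signs a record binding the content to its source, and anyone holding the key can later check that neither has been altered.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret; a real deployment would use public-key signatures.
SECRET_KEY = b"publisher-demo-key"

def sign_provenance(content: str, source: str) -> dict:
    """Produce a provenance record binding the content digest to its source."""
    record = {"source": source,
              "digest": hashlib.sha256(content.encode()).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: str, record: dict) -> bool:
    """Check that both the content and its claimed source are untampered."""
    claim = {"source": record["source"], "digest": record["digest"]}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    digest_ok = record["digest"] == hashlib.sha256(content.encode()).hexdigest()
    return digest_ok and hmac.compare_digest(expected, record["signature"])

rec = sign_provenance("An AI-generated abstract...", "Journal of Examples")
print(verify_provenance("An AI-generated abstract...", rec))   # intact content
print(verify_provenance("A tampered abstract...", rec))        # modified content fails
```

Unlike a statistical watermark embedded in the text itself, a signed record like this travels alongside the content, which is exactly why the tampering and adoption questions raised below matter.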
But, there are still limitations and questions. For a start, it’s easy to tamper with watermarks. And would commercial LLMs integrate them as a default, or only provide a mechanism for whoever wants to use them? The more users that apply them, the better the system. Additionally, would LLMs have their own watermark solutions, or will they share the same one? And would this work only for closed source LLMs such as GPT-4 and Google Gemini, or for open source such as Meta’s Llama as well?
STM has recently proposed a mechanism for assessing image integrity further upstream in the research process and certifying images at the point of original creation. While this is a good idea to help detect image integrity issues, my concern would be around the feasibility and effort of deployment and operation, given the levels of complexity and collaboration (among instrument manufacturers, vendors, publishers, policy makers, etc.) involved. There are also other challenges, such as the focus on authenticity rather than manipulation, and the inability to distinguish between normal beautification and manipulation.
It’s clear that combating disinformation is a long-term effort that requires multiple technologies and cross-functional teams. Its effective implementation demands not only robust tools, but also close cooperation among publishers, institutions, and authors, which can be challenging to coordinate.
4. Spatial computing
As many of us foresee, the metaverse is coming. Human-machine synergy – collaborative systems in which humans and machines work together to achieve outcomes neither could accomplish alone – is likely to become popular in scholarly publishing too.
Spatial computing is an important technology for human-machine synergy. It integrates physical and digital objects into a shared 3D space, enabling users to interact with digital content in ways that feel natural – such as through augmented reality (AR), virtual reality (VR), or mixed reality (MR). Although these are becoming more popular in industries such as logistics and manufacturing, there is still some way to go in scholarly publishing.
I believe that spatial computing has the potential to significantly enhance publishing workflows, research dissemination, and audience engagement in the future. For example:
- It can make research outcomes and presentation more interactive and intuitive. Researchers can present their findings using 3D models and AR-enhanced visuals, or even walk through a virtual lab or engage with dynamic, data-driven visualizations.
- AR features can be integrated into journals or books to allow readers to better understand images and diagrams with interactive explanations. This will enrich scholarly content and differentiate publishers in the market.
- Academic conferences and events could host virtual exhibits, poster sessions, or networking spaces, transcending physical location and language limitations.
- Editors and reviewers could interactively check image duplication, quickly understand what supplementary datasets are available, and intuitively visualize the relevancy, novelty, and quality of the manuscript.
Although spatial computing can improve collaboration and user experience, it still faces several challenges which mean it hasn’t yet been widely adopted:
- Technology barriers: The need for hardware such as AR glasses or VR headsets has limited adoption. Plus, developing AR/VR tools, maintaining spatial computing platforms, and expanding this ecosystem all involve significant investment.
- Content creation complexity: Converting existing materials into spatial content or creating new ones requires expertise, resources, and technologies.
- Learning curve: It takes time for people to start using and realize the benefits of any emerging technology – just as it did with the internet and AI.
Overall, spatial computing holds transformative potential for scholarly publishing, enabling the industry to move beyond static content and create richer, more impactful experiences.
The future is undeniably exciting, with transformative possibilities on the horizon, but there is much that needs to be addressed to ensure these advancements are inclusive, ethical, lawful, reliable, and impactful.
Discussion
4 Thoughts on "Navigating the Digital Frontier: How Emerging Tech Trends Are Shaping Scholarly Publishing"
was this WRITTEN by AI? Be honest. . .
Thank you for your comment. The key insights and reflections in the post are based on my thoughts and experience, inspired by Gartner’s webinar. To ensure clarity and polish, I collaborated with a human copywriter and used AI to refine the language. This approach helps me communicate my ideas more effectively while maintaining authenticity.
My wish for 2025 is that we can leave posts about ‘AI’ behind in 2024 🙂
Thank you for your comment. Indeed, in the near future AI will be embedded in tasks and applications as a backbone by default. Instead of talking about AI, people will mainly talk about real applications.