We’re used to thinking about AI through the lens of scholarly publishing, but exploring its application in other industries is incredibly valuable.

At the 3rd Generative AI Summit in London, global leaders and companies shared how they’re embedding generative AI into strategies, workflows, and products for commercial success, operational efficiency, and competitive advantage.

For C-level executives in scholarly publishing, the event highlighted the need for generative AI (GenAI) to evolve into a business-critical capability — without compromising the trust and authority essential to scholarly communication.

Here, we’d like to share key takeaways and insights from multiple perspectives and explore what they mean for publishers.

[Illustration: a small figure with a telescope stands on the hand of a large robot, looking ahead against a backdrop of AI imagery.]

1. Strategy Alignment and Cultural Shifts in AI Adoption

Align AI Initiatives with Core Business Goals

Industry leaders agree that AI projects should be mapped to measurable business objectives (impact, compliance, scalability, sustainability). AI is moving from R&D/proof-of-concept to real applications.

For Publishers: AI efforts should enhance strategic goals like publishing integrity, author productivity, and operational efficiency. Defining and tracking business-aligned outcome metrics (fraud cases flagged, author satisfaction improvement, time saved per manuscript) is crucial to prevent AI becoming an isolated R&D exercise.
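
To make that concrete, here is a minimal sketch of how such outcome metrics might be tracked against a baseline; the metric names and figures below are illustrative placeholders, not results reported at the summit.

```python
# Hypothetical baseline vs. post-AI outcome metrics for one journal portfolio.
baseline = {"fraud_cases_flagged": 12, "author_csat": 3.8,
            "hours_per_manuscript": 6.0}
with_ai = {"fraud_cases_flagged": 31, "author_csat": 4.1,
           "hours_per_manuscript": 4.2}

# Reporting deltas against a baseline keeps the AI effort tied to business
# outcomes rather than model benchmarks.
for metric in baseline:
    delta = with_ai[metric] - baseline[metric]
    print(f"{metric}: {baseline[metric]} -> {with_ai[metric]} ({delta:+.1f})")
```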

Organizational AI Tension Points

Many organizations are still determining AI strategy ownership. There’s a notable disconnect between leadership (75% feel confident in AI adoption) and staff (45% feel their company is successfully adopting AI).

For Publishers: This disconnect might exist between C-level vision and operational realities across editorial, peer review, and production teams. Publishers must bridge this gap through clear communication, cross-functional alignment, and shared ownership of AI initiatives.

Responsible AI Principles as Core Strategy

Human oversight, fairness, transparency, accountability, and privacy were emphasized. AI systems must be auditable, interpretable, and secure by design.

For Publishers: This is fundamental when designing and applying AI in content generation, information discovery, and potentially peer review. Publishers must ensure editorial boards, authors, readers, and reviewers trust the AI-enhanced processes.

AI Literacy and Responsible AI Policy

The summit highlighted AI literacy as crucial for successful adoption. Training and policies should be clear, role-specific, and engaging. According to one speaker, the UK newspaper Financial Times improved employee productivity by about 11% after rolling out targeted AI training in under a year.

For Publishers: Educating employees and users on AI’s capabilities and responsible use is essential. A living AI usage policy should support this with practical guidance, improving confidence and performance across publishing functions.

Building Cross-Company AI Communities

Many organizations have created internal AI taskforces to encourage shared learning and sustainable AI integration.

For Publishers: Cross-functional groups help break down silos between technology, product, editorial, compliance, and legal teams, encouraging responsible experimentation and collaborative governance.

The Publishing Paradox: Monetize or Be Marginalized

The very technology that threatens to disintermediate publishers also offers unprecedented opportunities for growth. Those who view AI merely as another technology trend risk severe market displacement.

For Publishers: This means reconceptualizing their role in the AI ecosystem — not just as content providers, but as trust guarantors, context creators, and knowledge navigators. Publishers become guides, educators, and ‘referees’, enabling and empowering others to make the right decisions when using AI. This shift demands both strategic investment and careful risk management.

2. Governance, Risk and Ethical Compliance

Comprehensive AI Risk Governance

Firms are aligning with frameworks like ISO/IEC 42001 and NIST AI RMF, emphasizing risk inventories, controls, and feedback loops.

For Publishers: Each AI tool used should be formally tracked and evaluated for ethical, legal, and operational risk — especially where editorial decisions are involved.
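
As a sketch of what such tracking could look like in practice, the record below formalizes one entry in an AI risk inventory. The fields, tool names, and escalation rule are our own illustration, not a prescribed schema from ISO/IEC 42001 or NIST AI RMF.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIToolRecord:
    """One entry in an AI risk inventory (illustrative fields only)."""
    name: str
    owner: str                        # accountable team or role
    use_case: str                     # e.g. "manuscript triage"
    touches_editorial_decisions: bool
    ethical_risk: RiskLevel
    legal_risk: RiskLevel
    operational_risk: RiskLevel
    mitigations: list[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        # Escalate anything high-risk, or anything that influences
        # editorial decisions, per the governance principle above.
        return self.touches_editorial_decisions or RiskLevel.HIGH in (
            self.ethical_risk, self.legal_risk, self.operational_risk)

register = [
    AIToolRecord("summarizer-v2", "Product", "abstract summarization",
                 False, RiskLevel.LOW, RiskLevel.MEDIUM, RiskLevel.LOW),
    AIToolRecord("triage-assist", "Editorial", "manuscript triage",
                 True, RiskLevel.MEDIUM, RiskLevel.MEDIUM, RiskLevel.MEDIUM),
]
for tool in register:
    if tool.needs_review():
        print(f"{tool.name}: flag for governance review")
```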

AI Risk in Enterprise Taxonomies

AI risks are now part of enterprise-level risk taxonomies, which focus on reputational, regulatory, and operational exposures such as bias, hallucinations, and deepfakes.

For Publishers: This reinforces the need to include AI risks in enterprise-wide governance models.

Regulation Is Catching Up

Regulatory bodies are actively shaping the AI landscape. Compliance (e.g., with GDPR, EU AI Act) is becoming critical to deployment decisions.

For Publishers: Build regulatory compliance into AI strategy, track relevant global developments, and adapt risk frameworks accordingly. Consider compliance early in AI adoption and development.

Standardized AI Intake and Approval

Firms are standardizing how AI tools are reviewed and approved. Unregulated use of public AI tools can introduce risks for IP, privacy, and security.

For Publishers: Offer editors, authors, and staff trusted AI capabilities — avoiding the risks of shadow AI tools.

Human Oversight Remains Critical

AI must not act autonomously in high-risk areas. Human review is essential for decision-making, especially around sensitive content or life-critical domains.

For Publishers: AI can assist but not decide. This principle should guide how we design human-in-the-loop systems — with people, not AI, owning the final call.
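
A minimal sketch of that principle in code, assuming a hypothetical triage workflow: the AI produces a recommendation, but nothing is actioned without a recorded human decision.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    manuscript_id: str
    suggested_action: str   # e.g. "desk-reject", "send-to-review"
    confidence: float
    rationale: str

def final_decision(rec: AIRecommendation, editor_decision: str | None) -> str:
    """The AI suggests; a named human decides. There is no auto-apply path."""
    if editor_decision is None:
        raise RuntimeError(
            f"No editor decision recorded for {rec.manuscript_id}; "
            "AI output alone is never actioned.")
    # The AI rationale is logged for audit, but the editor's call wins.
    return editor_decision
```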

Deepfakes as a Growing Integrity Risk

The misuse of generative AI to create deepfakes is escalating. Tackling AI misuse requires multiple levers: preventing harmful generation, tagging synthetic content, detecting deception, and enforcing rules against bad actors.

For Publishers: This affects scholarly publishing through fake author profiles, AI-generated manuscripts, or fabricated datasets/images. Adopt layered defences such as prevention (policy), detection (AI integrity checks), tagging (provenance), and enforcement (flagging repeat offenders).
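
Those layers map naturally onto a pipeline. The sketch below is purely illustrative (the field names and thresholds are invented), but it shows how each layer contributes independent flags rather than a single pass/fail verdict.

```python
def integrity_pipeline(submission: dict) -> list[str]:
    """Layered checks on a submission; each layer can add a flag."""
    flags = []
    # Prevention: policy acknowledgement is required before anything runs.
    if not submission.get("ai_use_policy_acknowledged"):
        flags.append("policy: missing AI-use disclosure acknowledgement")
    # Detection: integrity checks (placeholder score from an upstream model).
    if submission.get("image_manipulation_score", 0.0) > 0.8:
        flags.append("detection: possible image manipulation")
    # Tagging: record provenance for any declared AI-generated content.
    if submission.get("ai_generated_sections"):
        flags.append("provenance: tag AI-generated sections in metadata")
    # Enforcement: repeat offenders get escalated, not silently rejected.
    if submission.get("author_prior_violations", 0) >= 2:
        flags.append("enforcement: escalate to research-integrity team")
    return flags
```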

Enterprise-Grade AI Governance

AI tools and agents need access to data, but that access must be governed for privacy, permissions, and compliance, in line with enterprise-grade security models.

For Publishers: AI systems touching manuscripts, author data, peer review, or usage analytics must meet the same governance and security standards as core publishing platforms.

The Five Non-Negotiables for Risk Mitigation

The speakers at the AI summit agreed on five essential guardrails for scaling AI:

  1. Data Governance First: Clean citation networks, structured metadata, and disambiguated authorship information are crucial for AI functionality and risk mitigation.
  2. Human-in-the-Loop Oversight: Maintain editorial authority through strategic human touchpoints in AI workflows — particularly for academic integrity, factual verification, and ethical considerations.
  3. Transparent Provenance: Develop clear standards for attribution, disclosure of AI involvement, and source traceability to build trust (a minimal metadata sketch follows this list).
  4. Inclusive Design: Include diverse voices, languages, and perspectives in your training data and product design.
  5. Cross-functional Governance: Establish AI “councils” bridging technology, editorial, legal, and business functions to prevent siloed projects that fail to scale.
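
On the provenance point, a disclosure record might look something like the structure below. The field names are hypothetical and not drawn from any published metadata standard; the point is that AI involvement is declared per stage and tied to a human reviewer.

```python
# A hypothetical AI-involvement disclosure record attached to an article's
# metadata; every identifier and field name here is illustrative.
ai_provenance = {
    "article_doi": "10.1234/example.5678",
    "ai_involvement": [
        {
            "stage": "language-polishing",
            "tool": "vendor-llm-x",          # placeholder tool name
            "human_reviewed": True,
            "disclosed_by": "author",
        },
        {
            "stage": "figure-caption-draft",
            "tool": "internal-summarizer",
            "human_reviewed": True,
            "disclosed_by": "editorial-office",
        },
    ],
    "source_traceability": {
        "training_content_licensed": True,
        "citations_verified": True,
    },
}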

3. AI Application Trends, Deployment and Data

Timeline for Generative AI Integration

Gartner’s radar shows generative AI applications maturing across several time horizons — from now to 6+ years out.

For Publishers: Create short-, mid-, and long-term roadmaps for AI tool adoption. Prioritize near-term opportunities (content summarization, translation) versus long-term investments (agent-assisted peer review tools).

Agentic AI on the Rise

By 2028, 33% of enterprise software may use agentic AI — systems that don’t just suggest, but act.

For Publishers: Autonomous agents could help triage submissions or assist authors in real-time — but only if properly constrained. Governance, transparency, and editor trust must evolve alongside capability.
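
One way to make "properly constrained" concrete is to give an agent an explicit allow-list of actions and route everything else to a person. A toy sketch, with invented action names:

```python
# The action space is an explicit allow-list; anything outside it is
# proposed to an editor rather than executed.
ALLOWED_ACTIONS = {"request_missing_files", "suggest_reviewers", "summarize"}

def run_agent_step(proposed_action: str, payload: dict) -> str:
    if proposed_action not in ALLOWED_ACTIONS:
        return f"escalate_to_editor:{proposed_action}"
    return f"execute:{proposed_action}"

print(run_agent_step("suggest_reviewers", {}))  # execute:suggest_reviewers
print(run_agent_step("desk_reject", {}))        # escalate_to_editor:desk_reject
```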

Enterprise AI Readiness Challenges

Despite strong belief in AI’s transformative potential, 71% of organizations are stuck piloting GenAI projects due to knowledge, process, or infrastructure gaps.

For Publishers: Even large publishers can face fragmentation across editorial systems, legacy infrastructure, and uneven AI literacy. Identify gaps now to prevent wasteful investment, and ensure cross-functional ownership to avoid silos.

Build versus Buy versus Partner Strategy

Organizations must rethink whether to build tools in-house, buy them from vendors, or partner. The “F1 Pitstop” framework encourages strategic pauses to assess these choices.

For Publishers: Make timely, thoughtful choices about which tools to buy, build, or partner with, considering risks, costs, and control. Use a unified decision matrix to evaluate options for each use case and reassess investments quarterly.
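
A decision matrix can be as simple as weighted scoring. In the sketch below, every criterion, weight, and score is invented for illustration; the value lies in writing the trade-offs down so they can be re-scored each quarter.

```python
criteria_weights = {"risk": 0.3, "cost": 0.25, "control": 0.25, "speed": 0.2}

options = {
    # scores run 1 (poor) to 5 (good) per criterion
    "build":   {"risk": 3, "cost": 2, "control": 5, "speed": 2},
    "buy":     {"risk": 4, "cost": 3, "control": 2, "speed": 5},
    "partner": {"risk": 3, "cost": 4, "control": 3, "speed": 4},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in sorted(options.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```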

AI “Did Not Finish” Failures

AI initiatives can fail due to overbuilding, misaligned partnerships, staffing gaps, or financial misjudgment.

For Publishers: Implement phased deployments to test value early (for example, a limited beta with 10-50 journals) and avoid technical overreach: don’t build what you can’t maintain or scale. Define exit plans in AI vendor contracts from the outset.

4. Technology and Evaluation

Knowledge Graphs as Strategic Foundation

Traditional databases aren’t optimized for modern AI workflows. Graph-based data structures allow for more intelligent, contextual, and dynamic data exploration.

For Publishers: Scholarly publishing operates on complex networks — a natural fit for graph models. Knowledge graphs can improve AI-driven reviewer matching, citation networks, fraud detection, and information discovery.
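
Even a toy triple store conveys why the graph shape helps. The sketch below encodes a few scholarly relationships as edges and runs a reviewer-conflict check by traversal; all identifiers are made up, and a production system would use a graph database rather than a Python list.

```python
# A tiny scholarly knowledge graph as (subject, relation, object) triples.
edges = [
    ("paper:A", "cites", "paper:B"),
    ("paper:A", "authored_by", "person:smith"),
    ("paper:B", "authored_by", "person:jones"),
    ("person:jones", "affiliated_with", "org:uni-x"),
    ("person:smith", "affiliated_with", "org:uni-x"),
]

def neighbors(node: str, relation: str) -> list[str]:
    return [t for (s, r, t) in edges if s == node and r == relation]

# Reviewer-matching style traversal: flag a candidate conflict of interest
# when a reviewer shares an affiliation with one of the submission's authors.
def shared_affiliation(author: str, reviewer: str) -> bool:
    return bool(set(neighbors(author, "affiliated_with"))
                & set(neighbors(reviewer, "affiliated_with")))

print(shared_affiliation("person:smith", "person:jones"))  # True
```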

Multi-Factor AI Model Evaluation

Microsoft shared a framework using multiple metrics: fluency, coherence, ‘groundedness’, relevance, and similarity.

For Publishers: This is highly applicable to LLMs used in manuscript assessment, summarization, or language polishing. Invest in contextual, content-aware evaluations instead of relying on generic metrics.
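
As a sketch of how multi-metric evaluation might be operationalized for scholarly content, assume each output is scored 1-5 on the dimensions above. The gating rule below (treating groundedness as a hard floor rather than just another averaged number) is our own illustration, not part of the framework Microsoft presented.

```python
# Hypothetical per-output evaluation scores on a 1-5 scale.
scores = {"fluency": 4.6, "coherence": 4.2, "groundedness": 3.1,
          "relevance": 4.4, "similarity": 3.9}

GROUNDEDNESS_FLOOR = 3.5  # for scholarly content, factual grounding is a
                          # gate, not just one more averaged number

def evaluate(scores: dict) -> tuple[float, bool]:
    average = sum(scores.values()) / len(scores)
    passes = scores["groundedness"] >= GROUNDEDNESS_FLOOR
    return average, passes

avg, ok = evaluate(scores)
print(f"mean={avg:.2f}, groundedness gate passed={ok}")  # mean=4.04, False
```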

Cost and Carbon Impact Considerations

AI growth brings higher computing and environmental costs that are not currently sustainable.

For Publishers: Evaluate the return on investment for each model, optimize where possible, and consider sustainability in AI strategy.

Intelligence Getting Cheaper

The cost of delivering useful AI performance is dropping rapidly, especially for inference.

For Publishers: More publishers will be able to integrate AI at scale for applications like real-time Q&A, classification, copyediting, or decision support. AI will become a core part of operational budgets.
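
Falling inference prices make back-of-envelope budgeting worthwhile. Every number in the sketch below is a placeholder rather than a quoted price, but the arithmetic shows why per-manuscript AI assistance is entering normal operating-budget territory.

```python
# Back-of-envelope inference budgeting; all figures are hypothetical.
manuscripts_per_year = 20_000
tokens_per_manuscript = 30_000        # prompt + completion, assumed
price_per_million_tokens = 0.50       # illustrative unit cost in USD

annual_cost = (manuscripts_per_year * tokens_per_manuscript / 1_000_000
               * price_per_million_tokens)
print(f"${annual_cost:,.0f} per year")  # $300 per year at these assumptions
```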

5. Opportunities and Monetization Strategies

Three Monetization Pathways with Proven Returns

  1. Enhance, Don’t Replace: Successful AI implementations enhance existing workflows. Publishers are building AI-augmented discovery layers, semantic search tools, and research assistants without undermining the value of the underlying content (a toy retrieval sketch follows this list).
  2. Accelerate the Content Pipeline: AI tools that accelerate editorial processes deliver internal efficiency and external value. Several publishers reported 60-90% reductions in time-to-publication while maintaining quality standards.
  3. Repurpose Your Content Assets: Repackage content through summarization, translation, visualization, and personalization. By maintaining control of how AI interacts with published works, publishers can prevent disintermediation while creating differentiated products.
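
To show the shape of such a discovery layer: real systems use learned embeddings, but even a bag-of-words vector over a two-item corpus illustrates the retrieval pattern. Everything below (DOIs, titles) is invented.

```python
import math
from collections import Counter

# A toy semantic-ish search layer over a miniature corpus.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = {
    "10.1234/aaa": "graph neural networks for citation analysis",
    "10.1234/bbb": "peer review fraud detection with machine learning",
}

query = embed("detecting fraud in peer review")
ranked = sorted(corpus, key=lambda doi: cosine(query, embed(corpus[doi])),
                reverse=True)
print(ranked[0])  # 10.1234/bbb
```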

Unlocking Revenue Streams and New Products

Generative AI can revolutionize scholarly publishing through:

  • Enhanced Content Accessibility: AI-driven tools for summarization, translation, and metadata optimization can make research more discoverable globally.
  • Personalized User Experiences: Intelligent agents can curate tailored recommendations for researchers, institutions, and libraries.
  • Automated Workflows: Tools like AI-assisted peer review and manuscript triage reduce bottlenecks in editorial processes.
  • New Product Offerings: Publishers can develop AI-powered research assistants or dynamic knowledge platforms for academic institutions.

Managing Content Risks and IP Protection

While generative AI opens doors for innovation, it also presents risks:

  • Content Cannibalization: LLMs trained on proprietary scholarly content risk devaluing original publications by generating derivative works without proper attribution.
  • Data Privacy and Compliance: Misuse of sensitive author or reviewer data could lead to reputational damage and legal challenges.
  • Quality Control: AI-generated summaries or citations may introduce inaccuracies or “hallucinations.”
  • Fragmented Deployments: Siloed AI initiatives may lead to inefficiencies and inconsistent user experiences.

Moving from Defense to Offense

The shift from defensive posturing to strategic offense was a striking theme throughout the conference. Publishers gaining competitive advantage are those integrating AI into their core value proposition — not those attempting to insulate themselves from change.

For Publishers: Leverage unique data assets, domain expertise, and trust to create AI products that enhance rather than replace the scholarly ecosystem.

Conclusion

To turn generative AI from a potential threat into a competitive advantage, it’s essential to balance innovation, governance, and strategic focus. Overregulation may discourage adoption, while under-regulation risks ethical lapses. As generative AI tools become more commonplace throughout the writing process, maintaining editorial integrity and transparency is paramount.

Current AI agents are still in the early stages, primarily focused on task execution, information retrieval, and calling other tools. As AI capabilities evolve, governance and compliance frameworks must keep pace, especially with emerging national regulations.

By aligning innovation with governance and focusing on scalable use cases that protect intellectual property, publishers can unlock new revenue streams.

Hong Zhou

Hong Zhou leads the Intelligent Services Group in Wiley Partner Solutions, which designs and develops award-winning products and services that leverage advanced AI, big data, and cloud technologies to modernize publishing workflows, enhance content and audience discovery and monetization, and help publishers move from content provider to knowledge provider.

Pascal Hetzscholdt

Pascal Hetzscholdt is Senior Director of AI Strategy & Content Integrity at Wiley. In this role, he supports legal and technical initiatives related to AI strategy, governance, and implementation. Pascal’s activities related to AI center on three core areas: establishing robust guardrails to balance technological innovation with content integrity; compliance auditing and the responsible use and integrity of content in AI model training and outputs; and safeguarding content security, integrity, and intellectual property rights in the development and deployment of AI services.
