Each year, I make a point of reading tech-trend reports outside our lane. Scholarly publishing and education have their own constraints, but technology waves rarely start here. They break first in industries that live and die by speed, automation, and customer experience — finance, commerce, and marketing. When those sectors begin redesigning for agent-mediated discovery or autonomous operations, I treat it as an early signal of what will reach our workflows next — just with different risk tolerances and trust requirements.

With that lens, I recently read eight impactful technology trend reports from Bain, CB Insights, Deloitte, Gartner, IBM, McKinsey, Kantar, and SaM. What surprised me wasn’t any single prediction. It was the consistency of the underlying shift across sectors, authors, and audiences. The same message surfaced repeatedly:

AI is no longer a feature. It’s becoming infrastructure — and the unit of value is moving from “a better tool” to “a better system.”

That matters a lot for scholarly publishing and learning/e-learning, where our “systems” (submission → review → production → dissemination; or authoring → teaching → assessment → credentialing) are deeply intertwined, highly regulated, and trust-sensitive by design.

Below are the four themes I saw repeated most often, what I think they mean for our industries, and where the reports quietly disagree.


1) Agentic AI is real — but “redesign, don’t automate” is the price of entry

Multiple reports describe the same pattern: organizations get some productivity from copilots, but the real gains come when AI is embedded into end-to-end workflows and paired with process redesign and data cleanup. Bain describes “tech-forward” enterprises moving from pilots to measurable profitability by scaling AI across core workflows — and then shifting focus again toward agentic AI as a “structural” change that can redesign how work gets done.

Deloitte puts a number on the gap: 38% piloting agents, only 11% in production, and it links failures less to model capability and more to automating broken processes.

McKinsey frames agentic AI as “virtual coworkers” that can autonomously plan and execute multi-step workflows — powerful, but still early relative to more established trends such as semiconductors and advanced connectivity.

And CB Insights adds a detail most trend reports skip: measurement. It captures leaders admitting that agent ROI “is not a precise science,” and it highlights a growing market of tools and platforms trying to quantify agent impact and help vendors monetize agent activity.

What that means in publishing and learning

In our industry, “agentic” doesn’t mean “a bot that writes summaries” or “a discovery assistant that searches for information.” It means a system that can reliably move work across boundaries (tools, teams, vendors, policies) with appropriate oversight.

If I translate the “redesign, don’t automate” lesson into publishing and learning, it becomes five practical moves:

  1. Pick one end-to-end workflow where the handoffs are the pain.
    For publishers: submission triage → reviewer selection → decision support; or production QC → metadata validation → downstream delivery.
    For learning: content assembly → alignment to outcomes → assessment generation → feedback loops.
  2. Define the controls before you define the prompts.
    What must be logged and monitored? What requires a human decision? Where do we need reproducibility and explainability (e.g., compliance, appeals, integrity investigations)?
  3. Treat “agent ROI” as operational design, not a dashboard problem.
    If you don’t know the baseline cycle time, rework rate, and escalation path, you can’t credibly claim improvement — and you’ll end up with automation that moves faster… in the wrong area or direction.
  4. Build on clean data and a true system of record.
    Agentic workflows are only as reliable as the data they run on. That means consistent, clean data pipelines across workflows and partners—and a protected system of record that goes beyond content and metadata to include the governance layer: audit trails, permissions, policy enforcement, and version history. Done well, this also strengthens proprietary data advantages.
  5. Treat interoperability as a prerequisite, not a bonus.
    For vendors and service providers, interoperability isn’t a “nice to have.” It’s the only way agentic workflows can move across tools, departments, and partners without turning into a brittle, one-off integration that breaks the moment anything changes.

2) Compute is the new constraint — even as models get cheaper

One of the most useful “tensions” in this stack of reports is Deloitte’s blunt observation: token costs fell dramatically (it cites a 280-fold drop in two years), yet some enterprises now see monthly AI bills in the tens of millions, because usage is exploding faster than unit costs decline. It also describes a shift from cloud-first to hybrid (cloud for elasticity, on-prem for consistency, edge for immediacy).

McKinsey makes the scaling problem even more concrete: demand for compute-intensive workloads is stressing infrastructure — power constraints, deployment friction, and real-world bottlenecks that are not solved by better prompts. In the era of AI, electricity is rapidly evolving from a standard utility cost into a strategic, high-value commodity.

Gartner bakes this into its trend set by foregrounding AI-native development platforms and AI supercomputing platforms — essentially: build for AI as a default, not as an add-on.

What that means in publishing and learning

For years, many of us have priced and planned technology as if costs were mostly fixed: platform licenses, content tools, vendor services. AI breaks that assumption. The shift I’m seeing (and expect to accelerate) is from “feature cost” to unit economics — what it costs to complete a real workflow step, not to ship an AI capability. Think cost per manuscript triaged, per integrity case handled, or per learner feedback cycle.

That shift also raises the bar on measurement. ROI frameworks are becoming essential, not optional — because they determine which workflows justify automation, where human oversight is worth the expense, and whether we should build, buy, or partner. They also shape commercial expectations: if customers can measure outcomes, they’ll expect vendors to price to them.

Once you can see where inference spend concentrates, you can make smarter design choices — smaller or domain-specific models where they’re sufficient, and human-in-the-loop checkpoints where they prevent expensive downstream rework. And pricing is already bending in the same direction: away from traditional seat- or volume-based SaaS and toward value-based models tied to outcomes (cycle time reduced, rework avoided, throughput increased), because AI makes usage less predictable — but impact more measurable.
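The shift from “feature cost” to unit economics can be sketched in a few lines. All of the numbers below are invented for illustration (token prices, human rates, and rework rates are my assumptions); the structure is what matters: the fully loaded cost of one completed workflow step includes inference, human oversight, and expected rework.

```python
# Illustrative sketch of per-step unit economics; every number here is made up.
def cost_per_completed_step(tokens_in: int, tokens_out: int,
                            price_in_per_1k: float, price_out_per_1k: float,
                            human_minutes: float, human_rate_per_hour: float,
                            rework_rate: float) -> float:
    """Fully loaded cost to complete one workflow step (e.g. one manuscript
    triaged), including a human-in-the-loop checkpoint and expected rework."""
    inference = (tokens_in / 1000) * price_in_per_1k + (tokens_out / 1000) * price_out_per_1k
    oversight = (human_minutes / 60) * human_rate_per_hour
    # With a rework rate r, a step takes 1 / (1 - r) attempts on average.
    return (inference + oversight) / (1 - rework_rate)

# Compare a large general model against a cheaper domain model that needs
# more human review time and has a slightly higher rework rate.
big = cost_per_completed_step(12_000, 1_500, 0.0025, 0.0100,
                              human_minutes=2, human_rate_per_hour=60, rework_rate=0.05)
small = cost_per_completed_step(12_000, 1_500, 0.0003, 0.0012,
                                human_minutes=4, human_rate_per_hour=60, rework_rate=0.08)
print(f"cost per manuscript triaged: big={big:.2f}, small={small:.2f}")
```

Even in this toy comparison, the human-oversight term dominates the inference term, which is exactly why “which model is cheaper per token” is the wrong question and “which design minimizes cost per completed step” is the right one.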

3) Trust is no longer a policy document — it’s a product and architecture

Gartner’s 2026 trends include digital provenance, preemptive cybersecurity, AI security platforms, and confidential computing — essentially: trust mechanisms built into the stack, not bolted on.

IBM arrives at the same destination from a different route: it anchors “trust” in consumer expectations. It reports that 89% of consumers want to know when they’re interacting with AI, that trust drops sharply when AI use is concealed, and that about two-thirds of consumers would switch brands if a company intentionally concealed AI’s involvement in their experience.

SaM’s overview puts more detail around the same building blocks (confidential computing; blockchain-based provenance; and the move from reactive security to more predictive approaches).

And Deloitte adds a speed dimension: security models designed for perimeter defence won’t hold when threats (and defences) operate at machine speed.

What that means in publishing and learning

In our industries, trust is the foundation and the product. It’s why peer review exists, why learning outcomes matter, why we care about provenance, authorship, and assessment integrity, and why manuscripts can’t be uploaded to unauthorized AI tools.

The reports collectively suggest a future where trust won’t be “explained” — it will be embedded in the workflow and demonstrated through system behaviors and artifacts:

  • Clear AI disclosure patterns in user experiences (when AI is assisting, what data it used, what it didn’t do).
  • Security and peer-review integrity controls that assume AI is both an attack surface and a defence tool.
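A disclosure pattern like the first bullet can be a machine-readable artifact rather than a footnote. The sketch below is hypothetical (the field names are my own, not a published standard): a record attached to each AI-assisted output stating that AI was involved, what data it used, whether a human reviewed it, and what the AI explicitly did not do.

```python
import json

# Hypothetical sketch of a machine-readable AI disclosure artifact.
# Field names are illustrative, not drawn from any standard or report.
def make_disclosure(task: str, model: str, data_sources: list[str],
                    human_reviewed: bool, excluded_actions: list[str]) -> str:
    """Emit a disclosure record: when AI assisted, what data it used,
    and what it explicitly did not do."""
    record = {
        "ai_assisted": True,
        "task": task,
        "model": model,
        "data_sources": data_sources,          # what data the AI drew on
        "human_reviewed": human_reviewed,      # was there a human checkpoint?
        "excluded_actions": excluded_actions,  # what the AI did NOT do
    }
    return json.dumps(record, indent=2)

disclosure = make_disclosure(
    task="reviewer_suggestion",
    model="domain-tuned-model-v1",
    data_sources=["submission metadata", "public reviewer profiles"],
    human_reviewed=True,
    excluded_actions=["final decision", "confidential manuscript upload"])
print(disclosure)
```

Because the record is structured, downstream systems (and auditors) can verify disclosure automatically instead of hunting for a sentence in a policy page.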

Today, I don’t think AI’s trust problem is mainly that it’s not 100% accurate — humans aren’t either. The deeper issue is an accountability gap: when an AI system is wrong, it’s often unclear who bears legal responsibility and what remedies exist. My expectation is that the legal and regulatory frameworks we’ve built around human decision-making will adapt — slowly, unevenly, but inevitably — to the realities of AI-mediated work.

If we get this right, we can move beyond “AI anxiety” to “AI accountability.” If we get it wrong, we’ll trigger a predictable backlash: hidden automation, opaque decisions, and a rapid loss of trust, exactly the pattern the recent Moltbook experiment surfaced when novelty hit real-world security and governance.

4) Sovereignty and Geopatriation are becoming day-to-day architecture decisions

A few years ago, “data sovereignty” was often discussed like a niche compliance topic. These reports suggest it is becoming a mainstream strategic constraint.

IBM states that 93% of executives say they must factor AI sovereignty into their 2026 business strategy.

Gartner goes further with the concept of geopatriation — planning where digital workloads live to balance sovereignty, agility, and resilience (including governance controls and ongoing monitoring of geopolitical and regulatory risk).

McKinsey similarly notes intensified regional and national competition around critical technologies, with pushes toward sovereign infrastructure and localized capabilities.

What that means in publishing and learning

Publishing and learning platforms are global by default: authors, reviewers, editors, institutions, learners, and content delivery all cross borders. But the infrastructure and data rules increasingly do not.

This is where I think business and technology leaders in our space need to plan ahead and get very concrete. “Sovereignty” shows up as questions like:

  • Where do identity data and AI models live (and where can they legally be processed)?
  • Can models be trained or fine-tuned on certain content in certain jurisdictions?
  • Can we offer consistent AI experiences globally without fragmenting the product or technology?

In practice, I expect more “regionalization” of AI capabilities — not always visible to end users, but present in data residency, safety constraints, vendor choices, and operational policies. Several countries and regions, including the US, China, the EU, the UAE, Japan, and South Korea, are actively investing in “sovereign AI” strategies.

The convergence across these reports is encouraging; it makes the signal feel stronger than any single forecast. But the disagreements matter too. Where the reports clash is often where the hard questions live — and where we can learn the most about what’s still uncertain.

I found two “soft contradictions” worth calling out:

  A) “AI progress is unprecedented” vs. “foundation models may plateau”

Bain emphasizes the speed of progress and the expanding ecosystem around context, connectors, and agent-to-agent protocols. On the other side, Deloitte, in its “signals,” asks whether foundation models are reaching a plateau — suggesting that deployment quality and engineering strategies may matter more than chasing the newest model.

I think both can be true. Model capability gains may become more incremental: partly because high-quality training data is finite, and partly because today’s dominant AI architectures aren’t inherently designed for grounded causal understanding and long-horizon, real-world planning the way humans are. At the same time, system capability (tools, workflows, orchestration, interoperability) keeps accelerating. For publishing and learning, these cut both ways. The downside is that today’s AI still isn’t consistently reliable for high-stakes reasoning work such as manuscript triage, integrity screening, or initial review without careful guardrails and human oversight. The upside is that our differentiators are rarely “who has the biggest model.” They’re more often about who can deliver the most trustworthy, integrated workflow, with the right transparency, controls, and domain experts in the loop.

  B) “Everyone wants agents” vs. “few can measure them”

CB Insights captures, in executive voices, the struggle of measuring AI agent ROI. Bain argues leaders have “cracked the code” on AI ROI, but it also implies that level of maturity requires playbooks, data curation, and governance discipline many organizations don’t yet have.

The explanation might be maturity segmentation: a small group can measure ROI because they redesigned workflows with efficient evaluation and feedback pipelines; many others are still trying to measure “agent impact” without changing the system the agent lives inside.

Overall, the eight reports largely agree on the direction of travel: AI is shifting from helpful tool to operational actor. The open question, the one I’d genuinely like to hear this community debate, is more specific:

In scholarly publishing and learning, what is the minimum “trust stack” an agentic workflow needs before we’ll accept it at scale?

Because if AI is becoming infrastructure, trust isn’t a layer we bolt on. It’s the foundation — and the first thing our communities will notice when it fails.

Disclosure: AI assisted with report summarization and editing; the analysis is mine, and facts were verified against the original reports.

Hong Zhou

Dr. Hong Zhou is VP of Product Management at KnowledgeWorks Global Ltd., where he guides product vision and strategy, leads cross-functional teams, and drives innovation across publishing solutions for researchers, librarians, and publishers worldwide. Previously, he was Senior Director of AI Product & Innovation at Wiley, defining AI strategy and leading the roadmap. He helped shape Wiley’s AI ethics principles, advanced the Wiley Research Exchange and Atypon platforms, and led development of Wiley’s first AI-driven papermill detection tool, which won the 2025 Silver SSP EPIC Award for Excellence in Research Integrity Tools. He is a recognized industry leader in AI, product innovation, and workflow transformation. He also received an individual honorable mention for the 2024 APE Award for Innovation. He holds a PhD in 3D Modelling with AI and an MBA in Digital Transformation (Oxford University). He also serves as a COPE Advisor, Scholarly Kitchen Chef, Co-Chair of ALPSP’s AI Special Interest Group, and Distinguished Expert at China’s National Key Laboratory of Knowledge Mining for Medical Journals.
