Editor’s note: Today’s guest post is by Mark McBride, Director at Ithaka S+R, where he leads several strategic portfolios focused on some of higher education’s most pressing challenges.

The recent NISO Open Discovery Initiative (ODI) Survey Report on Generative AI and Web-Scale Discovery reads like a weather vane for higher education. It registers not just the prevailing winds of technical adoption, but the crosswinds of cultural fear, ethical concern, and institutional hesitation. The question it leaves us with is both urgent and unsettling: Do academic library leaders possess a strategy for responding to artificial intelligence in the library technology and publishing ecosystem?

At first glance, the data are reassuring. Libraries and publishers alike identify similar hopes for generative AI: improved visibility of content, more accurate recommendations, and time saved for staff. These are pragmatic ambitions and incremental enhancements to the work of discovery. Yet, beneath this surface consensus, the report uncovers deep fractures: fissures in values, incentives, and institutional readiness that could widen into crises of trust, equity, and sustainability if left unaddressed.


Shared Hopes, Divergent Fears

What libraries and publishers hope for is clear. They want students to find information more easily, content to surface more effectively, and staff to be liberated from some of the routine burdens of content management and curation. Generative AI, at its best, could act as a kind of conversational bridge between user and collection, helping a first-year undergraduate craft a bibliography or assisting a medical researcher in scanning unfamiliar literature across disciplines.

But when it comes to fears, the voices diverge. Libraries worry most about truthfulness: hallucinations, fabrications, summaries that mislead more than they inform. They fear the opacity of the machine, and what it might mean for students who may lack the critical skills to distinguish good information from bad. Publishers, on the other hand, fear loss of ownership: the possibility that their intellectual property will be ingested into large language models without consent or compensation.

And yet, a common thread runs through both sets of anxieties: transparency. Libraries want transparency to preserve user trust; publishers want it to protect the value of their content. Transparency is the hinge on which this entire debate swings.

The Question of Value

The NISO survey shows that libraries, by and large, believe generative AI can enhance the value of discovery systems. Features like multi-article synthesis or the generation of subject bibliographies hold real promise for staff and users alike. But publishers express a different calculus. They are optimistic about visibility but doubtful about access. Their worry is that AI will surface their content only to leave it locked behind walls that frustrate users and shift blame onto the library.

This access-visibility gap is not just a technical issue. It points to a deeper structural tension that raises the question: will AI in discovery serve as an equalizer, broadening access to knowledge, or as an amplifier of inequity, shining a spotlight on content that remains inaccessible to those without resources?

Environmental Concerns

Given the direct impacts of climate change that so many libraries have experienced, from floods to fires in recent years, it is perhaps not surprising that this report surfaced deep concern about environmental sustainability. Many respondents pointed to the carbon costs of running large AI models. For decades, libraries have prided themselves on values of preservation, access, and sustainability, whether in paper conservation or digital archiving. To extend that ethic into the carbon accounting of discovery systems is both radical and possibly necessary.

The Policy Blind Spots

And yet, for all this moral clarity, the survey reveals gaping holes in institutional preparedness. Most libraries do not have clear policies about whether their repositories can be crawled by AI systems. More than half of respondents did not know their institution’s stance. At the very moment when publishers are scrambling to renegotiate their relationships with AI companies, libraries remain uncertain, perhaps even passive, about the status of their own data.
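One practical, if blunt, instrument already exists for stating such a stance. As a minimal sketch, a repository could declare its position in robots.txt using the publicly documented user-agent tokens of major AI crawlers (the paths below are hypothetical, and compliance is voluntary):

```
# Hypothetical robots.txt for an institutional repository.
# The tokens below are documented by their operators; honoring them
# is voluntary, so this states a policy rather than enforcing one.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /repository/

# Google's opt-out token for AI training
User-agent: Google-Extended
Disallow: /repository/

# Common Crawl, a frequent source of training corpora
User-agent: CCBot
Disallow: /repository/

# Ordinary search indexing remains open
User-agent: *
Allow: /
```

Whether to disallow, license, or welcome such crawling is precisely the decision most survey respondents could not articulate; even this crude mechanism presupposes that an institution has taken a position.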

Similarly, while two-thirds of libraries are exploring AI features in vendor products, only about half are offering guidance to their communities on how to use AI responsibly in academic work. These gaps signal a troubling pattern: AI is already embedded in library systems, but governance, ethics, and literacy frameworks have not caught up.

As Rosalyn Metz observed in Digital Shift on Substack, libraries find themselves at the center of the AI debate, not because they sought the spotlight, but because their collections, metadata, and infrastructure form the connective tissue on which AI systems feed. The irony is sharp. The very cataloging practices built to foster human understanding have become the hidden scaffolding for machine prediction. Metadata, in particular, is the context AI craves; it shortens the leap between fragments, guiding models to generate coherence where otherwise they would flounder.
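To make that point concrete, consider a minimal, entirely hypothetical record of the kind libraries produce as a matter of routine, sketched here as Dublin Core-style fields in Python. Each field hands a model exactly the context (author, date, discipline, relationships, rights) that it would otherwise have to infer:

```python
# A hypothetical bibliographic record using Dublin Core-style fields.
# Structured context like this is what lets a model connect fragments
# of text to authors, dates, subjects, and related works.
record = {
    "dc:title": "Soil Carbon Dynamics in Restored Wetlands",
    "dc:creator": "Rivera, A.",
    "dc:date": "2021",
    "dc:subject": ["wetland restoration", "carbon sequestration"],
    "dc:type": "journal article",
    "dc:relation": "doi:10.0000/example",  # hypothetical identifier
    "dc:rights": "CC BY 4.0",
}
```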

And yet, rather than approach libraries as collaborators, AI companies have too often behaved like opportunists, deploying indiscriminate scrapers that strain discovery systems and disregard both intent and infrastructure. The result is not innovation but exploitation, a hollowing out of trust and capacity. As the ARL/CNI’s AI-Influenced Futures scenarios warned, without proactive strategies, libraries risk sliding into a feudal information order in which a few powerful actors dictate terms of access and discovery.

But passivity is not destiny. Libraries could do more than defend their systems — they could reclaim their agency by becoming a publishing apparatus of the AI era. APIs with clear rules of engagement, mandates for attribution, experiments with new revenue models, even library-led AI discovery projects; these are not fantasies but viable pathways. The point is not only to resist strip mining, but also to demonstrate leadership. The question is whether libraries will cede this future to external powers or assert their longstanding role as stewards of knowledge, shaping AI to serve the public good.
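What might "APIs with clear rules of engagement" look like in practice? Here is a minimal sketch, with every name and term hypothetical, of a discovery endpoint whose responses carry their own attribution and licensing requirements rather than leaving them implicit:

```python
def lookup(record_id: str) -> dict:
    """Stand-in for repository retrieval (elided)."""
    return {"id": record_id, "title": "(record body elided)"}


def fetch_record(record_id: str, declared_purpose: str) -> dict:
    """Return a record wrapped in explicit, machine-readable terms of use."""
    terms = {
        "license": "CC BY 4.0",
        "attribution_required": True,
        "attribution_text": "Example University Libraries",  # hypothetical institution
        "permitted_purposes": ["discovery", "research"],      # bulk model training excluded
        "rate_limit_per_minute": 60,
    }
    if declared_purpose not in terms["permitted_purposes"]:
        # Refuse, but disclose the terms so the refusal is legible to machines too.
        return {"error": "declared purpose not permitted under current terms",
                "terms": terms}
    return {"record": lookup(record_id), "terms": terms}


# A client that declares a disallowed use gets the terms back, not the content.
print(fetch_record("oai:example:1234", "model_training")["error"])
```

The design point is that permissions and refusals alike are machine-readable, so a compliant AI client has no excuse for ignorance of the terms.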

What’s at Stake

If academic libraries fail to develop clear strategies, the consequences could be severe.

  • Trust erosion: If discovery tools return hallucinated results, the credibility of the library itself could be undermined. Students may come to see the library’s systems as just another unreliable search engine.
  • Student learning: Faculty worry that AI in discovery could short-circuit students’ ability to learn critical thinking and research skills. If machines generate bibliographies and synthesize texts, what happens to the intellectual muscle built by searching and evaluating sources?
  • Equity: Publishers’ optimism about visibility may not translate into improved access, leaving users aware of what they cannot obtain. This could deepen divides between well-funded and resource-poor institutions.
  • Financial sustainability: Libraries will need to budget for AI infrastructure, further straining finances. What happens when AI becomes the next “must-have” feature priced into subscription packages?
  • Environmental responsibility: Without deliberate choices, libraries could inadvertently support discovery systems that increase carbon emissions — undermining their own commitments to sustainability.

A Call for Leadership

The ODI committee has outlined constructive next steps: sharing findings, developing transparency guides, and convening stakeholders. These are necessary and important. But they are not sufficient.

What is required now is leadership — library leaders who can articulate not just hopes and fears, but strategies. Strategies that define how AI will be evaluated, adopted, and governed within their institutions. Strategies that balance innovation with ethics, opportunity with responsibility.

Such strategies might include:

  1. Policy frameworks that clarify whether and how institutional repositories can be used to train AI models.
  2. AI literacy programs that equip students and faculty to use these tools critically, without undermining core research skills.
  3. Ethical guidelines that address transparency, accuracy, equity, and environmental impact.
  4. Collaborative negotiation with publishers and vendors to ensure that AI features serve educational values rather than merely commercial ones.
  5. Cross-institutional partnerships to share resources, expertise, and advocacy — so that AI adoption does not deepen the divide between resource-rich and resource-poor campuses.

Conclusion: A Strategy for Integrity

Libraries have long served as sanctuaries of trust, institutions where truth is pursued as a public good rather than a commodity. In a fragmented information environment, this credibility remains one of their most valuable assets. Generative AI now complicates that role, offering efficiency and new forms of access while simultaneously introducing risks of misrepresentation, opacity, and diminished user confidence. The challenge is not AI itself, but how it is integrated and governed.

Since 2023, Ithaka S+R has been charting the ways generative AI is reshaping higher education. Again and again, our research points to libraries as institutions poised to lead, drawing on their traditions of literacy, policy, and access to anchor the academic community in a time of rapid technological change. Their role is not incidental. It is essential if AI adoption is to remain responsible, equitable, and true to the educational mission. And yet, in the absence of a coherent vision for AI at the institutional level, libraries are struggling to define their own strategy.

The NISO ODI report demonstrates that the profession is alert to both promise and peril. What it reveals most clearly, however, is the absence of coherent strategies for adoption. Libraries have begun exploring vendor tools and drafting guidance, but too often without comprehensive policies, clear principles, or frameworks that address equity, sustainability, and transparency. This leaves institutions vulnerable to vendor-driven trajectories and publisher priorities, rather than grounded in their own values and mission.

The task ahead is therefore to move from reaction to strategy. Libraries must define governance structures for AI use, establish policies on content crawling and training data, and develop literacy programs for staff, faculty, and students. They must assess environmental impacts and negotiate collectively to ensure that AI features support scholarly access rather than merely amplifying commercial visibility. As my colleagues described in the Second Digital Transformation of Scholarly Publishing, while AI may present great opportunities for the publishing ecosystem, it also threatens the credibility and equity of scholarly communication unless new strategies and policies are rapidly developed.

Ultimately, libraries are not simply collections but civic institutions that structure society’s relationship to knowledge. Generative AI will reshape discovery whether libraries act or not. The critical question is whether library leaders will develop strategies that preserve trust, equity, and sustainability — or risk ceding that responsibility to external actors whose priorities may not align with the academy’s.

Mark McBride


Mark McBride serves as a Director at Ithaka S+R, where he leads several strategic portfolios focused on some of higher education’s most pressing challenges. His work cuts across the domains of teaching and learning, academic libraries, and the research enterprise, each shaped by the broader question of how institutions can sustain integrity and purpose amid accelerating change. A seasoned leader and systems thinker, Mark has helped universities and state systems reimagine how they organize, govern, and serve. At Ithaka S+R, he guides a team of researchers and analysts whose work seeks not merely to analyze problems, but to help institutions rediscover coherence, linking organizational strategy with human aspiration. His approach blends evidence with empathy, recognizing that transformation in education is as much cultural as it is structural. In his work with academic and cultural leaders, Mark invites organizations to orient toward resilience rather than control. He believes the most enduring institutions are those that, in the face of disruption, remember their deeper purpose. To lead through uncertainty, he often says, is not to impose direction but to help others discern their true north, the animating values that keep an organization both grounded and alive.

Discussion

12 Thoughts on "Guest Post — Do Academic Libraries Have a Strategy for AI?"

Thank you for the post. You might have written, “The very cataloging practices built to foster human understanding have become the hidden scaffolding for machine predation.” Instead of machine “prediction.”

Helen, I had to really think about this. That’s such a thoughtful observation. You’re right, “machine predation” adds a sharper edge that speaks to the extractive potential of these systems. While I meant “prediction” in the technical sense, your phrasing captures an important truth about how discovery infrastructures can be repurposed in ways that deserve our attention. Thank you for the thoughtful comment.

Hi Mark,
You suggest libraries can develop “experiments with new revenue models…” In a budget-constrained environment, this seems particularly interesting. Can you provide an example or two of what those new revenue models could look like?

Hey Bill, thanks so much for your question. As you know, higher ed is in a time when resources are stretched and every academic unit is being asked to demonstrate value. I do think libraries are uniquely positioned to generate new forms of revenue, not by selling services per se, but by creating shared service infrastructures, or building on their existing shared services, that other units across the university, or possibly an entire system, can invest in and benefit from.

Libraries already serve as trusted conveners at the intersection of information, technology, and ethics. That trust can become the foundation for a new model of shared services that both strengthens the academic enterprise and creates sustainable revenue pathways for the library. A system-wide AI and data services hub, for instance, could coordinate metadata standards and develop tools that support responsible research across campuses. Departments, centers, and research institutes could co-invest in and draw upon these capabilities, distributing costs while enhancing collective impact. These are just random thoughts; I’m sure you could come up with several others that may fit your institutional context.

I could see libraries leading cross-campus initiatives in AI literacy and digital research training, building a shared curriculum, maintaining the technology infrastructure, and enabling local units to deliver their own programming. In this model, revenue flows through institutional partnerships rather than individual transactions, with the library stewarding both the content and the collaboration.

Many, many librarians are indeed creating critical strategies around, for, and with the cluster of technologies here called “AI” – for example, Ciaran Talbott at the University of Manchester, who’s looking at everything from collections as data to new metrics and provenance collapse, and Aaron Tay, who’s been way ahead of the curve on genAI/semantic search discovery. I think it’s only fair to mention that most librarians have little control over their own IRs, e.g., configurations around crawlers or the ability to throttle. And given that CC BY licenses let anyone do anything with content, it’s not clear libraries would have obvious ways to control bot behaviour without blocking access to research outputs (the reason IRs exist). I also don’t think it’s totally fair to point to a lack of AI policies at university libraries when often the universities themselves lack a meaningful framework for AI. Librarians are often told to wait for the university to develop positions before building their own strategies.

Monica, thank you for sharing your observations and for highlighting these examples. I wasn’t familiar with Ciaran’s or Aaron’s work; both are really impressive and inspiring.

You make an excellent point about policy development. Too often, libraries are asked to wait until their institutions establish formal policies before taking action. It reminds me a bit of the early days of the web, when many campus libraries were among the first to launch websites and, in doing so, helped lead their institutions into the digital era.

Of course, I’m not suggesting libraries should bypass institutional governance, but rather that this history shows how libraries can thoughtfully lead their campuses toward the future, sometimes by taking the first careful steps before the path is fully defined.

Potentially of interest to those reading this post is the newly released Viewfinder toolkit which came out of work funded by an IMLS grant. https://www.lib.montana.edu/responsible-ai/. Viewfinder is a participatory toolkit designed to facilitate ethical reflection about responsible AI in libraries and archives from different stakeholder perspectives. It can be useful in addressing item 3 on your list, “Ethical guidelines that address transparency, accuracy, equity, and environmental impact.”

I question the premise that librarians “want students to find information more easily, content to surface more effectively, and staff to be liberated from some of the routine burdens of content management and curation”. Students have no trouble finding information or surfacing content – they have trouble sorting it and assessing its credibility. It’s unclear to me how AI will assist with this – unless we are asking it to do the work of analyzing the authority of the creator of the information, etc. – which would mean there is no work students need to do, and that leaves me wondering what the purpose of post-secondary education is.

Great point, Katharine. You’re absolutely right that students rarely struggle to find information anymore. The real challenge is learning how to sort it, question it, and discern what deserves their trust and therefore their attention. In that sense, the problem of abundance has become the problem of authority.

But I’d also argue that this is exactly where libraries (and perhaps AI) can play a constructive role. The goal isn’t for AI to make judgments for students, but to possibly make the scaffolding of judgment more visible by showing patterns of bias, authorship, and citation that help learners understand why some sources carry more weight than others. Used thoughtfully, AI could help surface the context behind information, not just the information itself.

There is a real tension here: libraries hope to use AI to make discovery easier, but their deeper mission is to help communities think more critically about knowledge itself. If we approach AI as a tool for teaching discernment rather than automating it, we might not lose the purpose of higher education at all; we might, in fact, rediscover it.

One thing I continue to find missing in much of the recent literature—including this piece—is meaningful discussion around the impact of AI on accessibility (A11y).

I think one answer to both of these concerns is link resolver integration into AI research tools. Of course this presumes AI tools will consistently provide citations, and the citations have to be real.
