Editor’s Note: Today’s post is by Andreas Mace. Andreas is a Systems Librarian at the National Library of Sweden.
For a long time, libraries have had a vision of one unified system to run most, if not all, of their core functions: a back-end where library staff work and a front-end where patrons search for material. Historically, resource management functions such as cataloging, acquisitions, and circulation started off as separate systems, which, when the technology allowed, were consolidated into the Integrated Library System – the ILS. But even as these unified systems emerged, other trends in technology and society were highly disruptive of this convergence, especially the rise of the web and electronic content.
Looking specifically at the academic sector, the fragmented ecosystem of search engines, databases, and platforms that users faced, with its lack of usability and comprehensiveness, made many libraries and technology providers dream of a unified search experience – a Google-like search that would encompass all the material a library collected, be it print or electronic. Web-Scale Discovery took the stage a little over a decade ago, although the “web-scale” part of the description was soon dropped. Around the same time, Next-Generation Library Systems were envisioned to bridge the divide between electronic and print. The Library Services Platform (LSP) entered the scene shortly after the rise of Discovery, offering new, unified possibilities. The unified vision looked close to becoming reality.
However, society and technology continue to transform, changing the scene. In the academic space, the many flavors of open science are highly disruptive, while other trends, such as the increasing use of mobile technology and social media and the emergence of big data and AI-driven technology, are quickly driving change. All of them point towards an increasingly intricate mix of software and services for both users and library staff to handle. Although the same is likely true for public and special libraries, this complex and diverse flora is perhaps most visible at academic libraries: institutional repositories, journal platforms, union catalogues, various software and repositories for research data, reference management software, collections of open access and transformative agreements, open educational resources, discovery systems, as well as print and electronic resource management – to name only those that come readily to mind. As individuals, we are also more and more used to interacting with a multitude of software, systems, and apps in our everyday lives, on our phones and on our computers. The world is becoming more and more digitized and interconnected, yet varied, with a multitude of different user interfaces.
Yet the old library dream of a unified library system remains strong. Ex Libris, the biggest provider of library management systems in the academic sector, is increasingly coupling its flagship products Alma and Primo VE together. Most other large providers are working towards similar unified platforms. The Koha ILS, previously focused more on print material, recently created its own internal ERM module rather than integrating with other, more specialized software. Even the new open-source platform FOLIO, for which modularity is a key design principle, is often branded as a unified solution (though with choice among its movable parts).
Even how we talk about library systems is dictated by this unified vision. The otherwise well-crafted and thought-provoking publication from OhioLINK and Ithaka S+R from a few years back, “It’s Not What Libraries Hold; It’s Who Libraries Serve”, is a prime example. Although the piece mentions the possibility of multiple systems and offers the idea of a “dis-integrated” library system, it mostly muddies the waters by talking about a library system, making little distinction between back-end and front-end functionalities, or among the wide range of functions such a system should support. Add some mentions of research data here, open access there, and it soon becomes apparent that the paper discusses the larger software landscape of academic libraries rather than a single library system.
Why is the unified dream of library software still so strong among the library community? In an ever more diverse library landscape, why do we still envision and talk about the library system? And what are the alternatives?
Before moving on, some more background can be found in Marshall Breeding’s article “Smarter Libraries Through Technology: Integration and Interoperability Versus Unification”.
The Benefits of the Consolidated Monolith (and Its Inherent Problems)
The rationale for the unified vision is fairly obvious, and in many cases it actually makes sense. Moving many workflows and much data into one piece of software makes them easy to manage (and makes it easy to get good statistics), while supporting generic workflows. For many libraries, it is easy to see why this is desirable. For large vendors, the classic monolith (especially when run as SaaS, Software as a Service) is also an easier way to manage operations. For the main tasks covered by those generic workflows, this kind of software often performs fairly well. So, in terms of functionality, these systems are obviously not bad. And while I have previously argued that modern technology and the web are increasingly fragmented and specialized, large-scale platform building is evident among the bigger players: Facebook, Google, Amazon, etc. They differ from the library space, though, in that they build platforms that host, in turn, specialized applications.
But taken all together, what is wrong with the unified vision in a focused profession such as librarianship? The answers are found in flexibility, user-centered methodologies, the importance of (local) workflows, choice over consolidation, and, last but not least, the strategic importance of taking control of basic infrastructure.
First off, the monolith may be good, but it is seldom great. And because of its size, it is often slow-moving. Given the increasingly fast pace of technological and social development, having (academic) libraries rely on a few large systems with low adaptability is not a strategically viable path, especially considering changing user behavior. Increasingly, too, the growing interest in and practice of user experience (UX) work in libraries, grounded in the needs of local users, does not sit well with proprietary vendors and SaaS models where development is far removed from local change and adaptation. The risk here is twofold: libraries may fall behind as other technological advances (especially those driven by big data or artificial intelligence) surpass them in performance, while at the same time being stuck in rigid systems that are not flexible or adaptable enough to meet changing behavior and needs, leaving little room for improvement grounded in a local context.
Similarly, the importance of local, as opposed to generic, workflows is a dividing factor. Large, unified systems are often built around a set of generic workflows meant to cater to as many libraries as possible, where local changes are difficult, if not impossible, to implement, leaving only two choices: workarounds or accommodation to the prescribed workflows. Just how important are those local needs, and how much do our workflows differ? Some are obviously universal among libraries, no matter their size or type, but the fact that there are regional, national, or local differences – and that those differences can be of great importance – should influence our choice of systems. It is also an important question whether a few vendors should dictate how we manage our workflows, or whether the systems we build or procure should support the workflows we create.
The ability to choose is another aspect that undermines the unified vision, as there is increasing consolidation among library technology providers, as well as in related sectors such as higher education. That providers also bundle their software together makes for a more homogenous marketplace, with serious lock-in effects. Likewise, there are significant silo effects from building software around a unified, library-specific vision that caters to a limited number of pre-designed workflows, when, as I have discussed previously, the academic software landscape increasingly revolves around other parties in the wider context of academic infrastructure – scientific publishing, researcher workflows, publication of research data, etc. – and thus involves other stakeholders at universities and academic institutions. Many of these areas have similar problems, with bundling of software (if not the outright creation of large monoliths) and severe lock-in effects.
Here, too, the risk for libraries of the unified vision is twofold: library-specific software is consolidated into a few large systems that offer little in the way of choice, flexibility, and adaptability, while library workflows and possibilities for data exchange are potentially marginalized in the larger academic landscape, where there is a higher degree of innovation and more available resources (together with an ambition to deliver “greater seamlessness across the full research lifecycle”).
Taken together, these factors point towards an increasing need for libraries to take control of their infrastructure, or at least parts of it. Using open-source software can be one way of doing so, as can working collaboratively on open projects to improve library or scholarly infrastructure.
A Modular Library Systems Landscape
So, what are the options for libraries looking for alternatives to large, unified systems? The obvious answer is to look at the opposite end of the spectrum, towards smaller, more focused systems or components (or “loosely coupled discrete services”, as Roger C. Schonfeld elegantly phrased it in an issue brief). Together, these software and services would form a modular systems landscape. We will look more closely at the merits and flaws of such a strategy, and how it might be implemented, in the second part of this guest blog post.