Technology is changing how users access content in a world increasingly on the move, yet delivery of content acquired by libraries is optimized for the user who is physically on campus. What about the students or researchers who live, study, travel, and work off campus? What percentage of the patron base do they make up? What content service failures develop as a result, and how much are those failures costing libraries and publishers? These are significant problems for our community, and their size and scale are growing.


Problems students face accessing content have been documented in the Kitchen by Roger Schonfeld and described by Michael Clarke in a humorous post about the frustrations of the mobile user. An excellent SSP webinar in May outlined the issues and the conversations underway in our community to address them. However, the community’s failure to deliver remains off the radar of most librarians and publishers who are unaware of the root causes or the impact of these issues. How can that be?

The user environment and expectations are changing. A growing number of students taking classes are not on campus much of the time, yet they still need access to resources. A few details highlight these changes: 60% of students at public universities are not housed on campus, while the figure is 36% at private universities, and 28% of enrolled students take at least one distance education course. Their seamless on-campus experience breaks down, however, once they are off the campus network and trying to reach content directly.

Evidence of Frustration

This situation is not limited to students. According to Ithaka S+R’s 2012 Faculty Survey, approximately half of all respondents said that, when they could not get immediate access to a scholarly monograph or journal article through their institution, they ‘often’ or ‘occasionally’ gave up and looked for another resource. Troublingly, more than 80% began their search by looking for freely available versions online, presumably because of prior experience of being blocked from accessing content.

Bianca Kramer at Utrecht University in the Netherlands studied Sci-Hub usage data attributed to her institution and compared it with holdings data at her library. She found that “75% of Utrecht Sci-Hub downloads would have been available either through our library subscriptions (60%) or as Gold Open Access/free from publisher (15%).” While these data are neither comprehensive nor granular enough for certainty, she concluded that a significant component of Sci-Hub usage was driven by problems of access and by users’ desire for convenience. This conclusion was also supported by a related study by Michael Levine-Clark, Jason Price, and John McDonald, presented during one of the SSP 2017 Conference pre-meeting sessions.

What about the students or researchers who live, study, travel and work off campus?

These issues are further compounded by how most users are accustomed to working on the Internet today: many of us are used to moving back and forth between email, social media, Google, and other tools, documents, and resources. Cloud-based systems have acclimated us to interacting online in a way that is untethered from our point of access. It is all seamless — or at least it should be — in the minds of our users.

Although service-oriented librarians are happy to teach users what they need to do, needing to first sign onto the campus network to access content or services is out of step with users’ expectations and limits authorized access to content. The promise of anytime, anywhere access to digital resources breaks down for users who are not on campus or signed into the campus network. Few untrained users would recognize or understand the behind-the-scenes mediation that allows access when they are on campus but prevents it when they attempt to reach the same resource off the campus network. Unfortunately, this disconnect isn’t visible to most librarians, because many disappointed users do not take the time to reach out to a librarian and register a complaint; instead, as the Ithaka report suggests many researchers do, they simply go looking for a version that is freely available.

For publishers, this lost usage does not show up as a “turn-away” connected to the campus IP addresses, because the user isn’t on the campus network. The usage simply isn’t showing up on their radar. The cost to the publisher is that libraries make subscription decisions based on cost per use; if all of this usage were counted, it could affect which journals and databases are retained.
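To make the effect concrete, here is a minimal sketch with purely hypothetical numbers showing how usage that never reaches the publisher’s platform inflates the cost per use a library sees:

```python
# Hypothetical numbers only: how uncounted off-campus usage distorts cost per use.
subscription_cost = 5000.00   # annual price of a journal package
counted_downloads = 400       # on-campus usage reported in the usage statistics
uncounted_downloads = 250     # off-campus attempts that never reached the platform

apparent_cpu = subscription_cost / counted_downloads
true_cpu = subscription_cost / (counted_downloads + uncounted_downloads)

print(f"Apparent cost per use: ${apparent_cpu:.2f}")  # $12.50
print(f"True cost per use:     ${true_cpu:.2f}")      # $7.69
```

In this illustration the journal looks roughly 60% more expensive per use than it actually is, which is the kind of distortion that can tip a cancellation decision.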

The 2015 IPEDS data show a total of $3 billion in academic library materials budgets spent on content. For libraries, delivery failure manifests itself in lower usage, but also in diminished satisfaction with library services among patrons who wrongly feel that they don’t have access to the content they need for their work. Current trends in higher education point toward offering more online courses, which will mean more students will be remote. Many institutions over the past decade have also fostered greater regional and international collaborations (NCSU is one example, but there are many others), spreading their teaching, faculty, and institutional reach around the world. On top of that, last year mobile internet usage surpassed desktop usage, showing the extent to which users are working with cloud-based systems and mobile devices. This combination of factors means the current situation will only get worse, and there are already too many instances where legitimate users fail to connect to content they are authorized to access.

Evolving Use of Cloud Services

Current expectations and behaviors have developed over time as our use of networks and the internet has changed, while investment in advancing publisher access control systems has not kept pace. The paradigm that underpins most publisher authentication systems is based on local networks that are distinct from, and walled off from, the wider internet. Years ago, most of the computer services a person might need, such as file servers, email servers, or printers, would be built into a Local Area Network or a larger Campus Area Network.

However, as the internet has grown, the distinction between what is inside and what is outside one’s local network is fading, and it no longer reflects how most users experience the network on which they work. Few of us use internally managed file servers to share documents with colleagues; we are more likely to use cloud services like Dropbox, Google Docs, or Microsoft’s document-sharing services. Email services are generally cloud based, as are any number of other services, including most traditional library management services such as the ILS and discovery systems. Even traditional “big iron” computing has moved to the cloud, to services such as Amazon Web Services or Microsoft Azure.

This is not to say that internal network services do not continue to exist, especially where security and control are paramount. But tethering users to a particular network to get their work done is increasingly anachronistic. Users, and especially their bosses, expect that people can get access to their work resources from anywhere, whether working from a home office, visiting other facilities, on the road at conferences, at clients’ workplaces, or collaborating with external project partners. This untethered world is already a reality for most consumer services; unfortunately, access to subscribed content often isn’t one of them.

For the first two-plus decades of the web, providing access to hosted resources based on information about where one was connecting to the network made sense. Publishers could reasonably grant access based on a user’s IP address and thereby provide a seamless user experience. Over the ensuing years, however, technology has advanced, connectivity has improved rapidly, and the cloud-based network has little to do with one’s physical location. This approach now hinders the ability of library patrons to get access to the content to which the library has subscribed. Users should be able to have anywhere, anytime access to content or services in a way that fits their workflow, not in a way that suits the workflow of an antiquated network architecture.
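To illustrate how brittle that model is, here is a minimal sketch, using made-up documentation address ranges, of the kind of IP-range check a publisher platform typically performs. The same authorized user passes it on campus and fails it from home or from a phone’s mobile network:

```python
from ipaddress import ip_address, ip_network

# Hypothetical IP ranges registered for a subscribing campus network.
CAMPUS_RANGES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

def is_entitled(request_ip: str) -> bool:
    """Grant access only if the request originates inside a registered campus range."""
    addr = ip_address(request_ip)
    return any(addr in net for net in CAMPUS_RANGES)

# On the campus network: entitled. At home or on a mobile carrier: turned away,
# even though the same person is an authorized user of the same subscription.
print(is_entitled("192.0.2.17"))   # True
print(is_entitled("203.0.113.9"))  # False
```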

Protection of User Privacy

Improving access needn’t mean that other values are set aside. And while more secure authentication and access provision does rely on greater user awareness, it is entirely possible, using SAML-based authentication protocols, to provide security and privacy simultaneously. Many institutions in both the US and the UK already provide content access via SAML-based technologies, such as Shibboleth and OpenAthens, so there is experience balancing these concerns.

Security and privacy are compatible if designed and implemented thoughtfully

During the Electronic Resources & Libraries Conference in April 2017, Brian Helstien from the University of Southern California Libraries gave a great talk about how he manages privacy and user data in the library’s systems. Although his talk was specifically about EZProxy, the notions of credentialing he discussed apply just as well to SAML-based identity management systems. When a user logs into a SAML-based authentication system, the system generates an authentication token along with attributes about that person, such as whether they are faculty, a student, or an alumnus, which are passed to the content provider. The system does not, however, identify the user individually. In this way, the system can provide granular attribute tracking and possibilities for analysis while retaining the anonymity of the user.
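For illustration only, the sketch below shows the kind of attribute bundle an identity provider might release and how a content platform could authorize on it without ever learning who the user is. This is not a real SAML library; the attribute names follow eduPerson conventions, and the identifier values, institution, and helper function are hypothetical:

```python
# Illustrative sketch: the identity provider asserts a pseudonymous identifier
# plus coarse attributes; the publisher platform authorizes on the attributes
# alone and never sees a name or email address.
assertion = {
    "subject": "pairwise-id:7f3a9c2d",  # opaque, per-publisher identifier (hypothetical)
    "attributes": {
        "eduPersonScopedAffiliation": "student@example.edu",
        "eduPersonEntitlement": "urn:mace:dir:entitlement:common-lib-terms",
    },
}

def authorize(assertion: dict) -> bool:
    """Grant access if the asserted affiliation belongs to a licensed institution."""
    affiliation = assertion["attributes"].get("eduPersonScopedAffiliation", "")
    role, _, institution = affiliation.partition("@")
    return institution == "example.edu" and role in {"faculty", "staff", "student", "alum"}

print(authorize(assertion))  # True, yet the user remains anonymous to the publisher
```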

Ken Klingenstein of Internet2 spoke at the December 2016 Coalition for Networked Information (CNI) Meeting and described how this privacy-protecting yet granular sharing of patron identity information can be done, citing as a reference an implementation of the service currently being deployed at Duke University. It is clear that a more secure system of providing access is possible, one that masks a user’s identity while browsing content online and therefore protects the patron’s privacy. Security and privacy are compatible if designed and implemented thoughtfully. There is a framework for protecting privacy in publisher and library systems, developed by NISO, that could serve as a guide for further development in this area (full disclosure: one of the authors of this post is the Executive Director of NISO).

Publishers and librarians need to acknowledge that, when it comes to serving users with electronic resources, for a variety of reasons, we are failing to deliver. A system that functioned well at the outset of electronic content delivery is showing its age, because the user community and the technology have advanced. Much like a crumbling bridge or a broken water main, the infrastructure we built so many years ago to provide access is starting to fail, and patrons are having to find their way around those systems to get to the content. It is time for publishers and librarians to work together to address this problem.

Todd A Carpenter

Todd Carpenter is Executive Director of the National Information Standards Organization (NISO). He additionally serves in leadership roles at a variety of organizations, including the ISO Technical Subcommittee on Identification & Description (ISO TC46/SC9), the Coalition for Seamless Access, and the Foundation of the Baltimore County Public Library.

Judy Luther

Judy Luther is President of Informed Strategies which provides market insights to organizations on innovative content and business models. A past president of SSP, she serves on the editorial board of Against the Grain and The Charleston Advisor.

Discussion

10 Thoughts on "Failure to Deliver: Reaching Users in an Increasingly Mobile World"

While the security and related issues are problematic, the problem is compounded by the time needed to conduct searches and effectively access the key materials. At present there are AI systems that can readily search an entire library’s journals and selectively organize and abstract sufficient information for a researcher to quickly zero in on key documents that may be of critical value.
Set in motion off site but effectively executed on site, this overcomes much of the technical trouble so carefully documented in this post. The changing nature and capability of AI systems in support of scholarly work can quickly target key materials, overcome much of the connectivity issue, and also reduce the need to spend time accessing materials that are often marginal at best.

Of course, it also reduces demand for access to and downloading of materials, impacting publisher revenues and readjusting use/demand statistics.

While perhaps not a direct comparison, I can’t help feeling that my experience reading this article was ironic.

When I clicked from my email alert to read this article, I received a full-site overlay ad encouraging me to sign up for email alerts about your articles, which I had to decline before I could read the article I was emailed about. This is a customer experience that is technologically simple to fix, but most publishers don’t take the time to consider the full customer journey that leads to such an irritating cycle, which is doubly irritating given the topic of today’s article.

Connect users with content as seamlessly as possible – that should be the motto for any content provider. It only takes carefully thinking through the customer experience to do this.

Thanks Rachel, each reader should only see that overlay once, the first time they come to the site, but as with everything on the web, we’ve had to go through a few technology iterations to get it to work properly. You’re right, it’s a great example–even something as simple as this is harder to get right than it seems.

FYI David, I see that overlay pretty regularly. I’ve reported it a few times.

I’m not super happy with it, thanks for getting it back on our radar. We had issues with it at launch and heard a lot of complaints, then things quieted down (I know I stopped seeing it). If folks are still seeing it regularly, I’ll poke our IT people.

I wonder if people just stop bothering to complain at some point … 🙂

I want to nitpick on one line: “However, the community’s failure to deliver remains off the radar of most librarians and publishers who are unaware of the root causes or the impact of these issues. How can that be?”

Librarians are acutely, frustratingly aware of these problems – to the point that a kind of hopelessness has set in amongst many who work to troubleshoot these issues, because it is such a constant battle. For example, my library is now on a summer break between sessions, but we’ve still discovered that one of our databases isn’t allowing access in Firefox unless the cache is cleared and the browser is closed and re-opened. This is a database to which we’ve had a long-time subscription! So it’s possible that publishers aren’t hearing librarians, up and down the publishing chain, when we are calling and emailing. I suggest that publishers might want to ask for an audit of the tech issues reported. They might better understand why it was a librarian (Gabriel Gardner) who first studied Sci-Hub quantitatively.

Thanks for speaking up, Charlotte. There are multiple points of failure along the entire path that can prevent access. The quote above relates to the specific issues that the growing number of off-campus users experience because they are not on the network – even though they are authorized to access the content.

Thanks Judy – I want to point out that the example I noted was actually ON campus IN the library. And this isn’t the first time that we’ve encountered things like this, either. Every librarian I know has sent/received a message at work asking, “If you are on Chrome or Firefox, can you please test….?”
That said, to your point, I can’t even imagine how many of our users off campus just quit instead of letting us know they have a problem.
