When I first started attending the Electronic Resources and Libraries (ER&L) conference a decade ago, it was still somewhat disruptive and hip for libraries to work with publishers and technology providers to modernize the digital researcher experience. Sessions on applying user-experience techniques to improve a library website, or leveraging search engine optimization to advance new open source repositories, were on the bleeding edge. This year’s program included topics that have become foundational for ER&L, such as demonstrating ways to measure library user behavior and adapt to changing research practices. These sessions remain well attended by librarians and content providers alike, even with a spin-off conference, Designing for Digital, now dedicated to library user experience.
While discoverability and user experience have become standard fare, I was struck this year by the overwhelming number of sessions focused on authentication and access. Given the many challenges in “content leakage,” it shouldn’t be a surprise that there was standing room only for sessions featuring new services or initiatives focused on plugging those gaps. While there was no prevailing consensus on the ideal methods for gating subscriber access and building a bridge to a more modern information experience, it was clear that organizations of all kinds are investing heavily in service-oriented solutions to the forces undermining library and publisher dominance as scholarly content access points.
The headline was that search and discovery have reached maturity and a comfortable status quo, while our shared systems for authentication are out of step with today’s information economy; reinvented access controls are in immediate need of our collective attention. There was no obvious answer: some advocated for new business models, some wanted a new approach to authentication technologies, and others held up removing paywalls entirely as a silver bullet. Regardless, access was the hot topic of the conference. I counted more than a dozen talks on link resolvers, RA21, open access, and other sales models. While proxy services and IP-based authentication still rule the day, the buzz of the conference pointed to a brave new future in access controls.
About half of the program showcased successful partnerships among libraries, publishers, and service / technology providers, that golden triangle. KBART (Knowledge Bases and Related Tools) was a key focus of such cooperative efforts, as getting metadata feeds and systems requirements right for user-friendly link resolvers requires close coordination across the supply chain. Of particular interest across a number of talks was the growing uptake of automated and customized KBART solutions, in anticipation of the forthcoming NISO recommendations.
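For readers who have not worked with the format directly, a KBART title list is simply a tab-delimited text file exchanged between content providers and knowledge bases. A minimal sketch, showing only a subset of the fields defined in the NISO recommended practice and using a hypothetical journal, identifiers, and URL, might look like this:

```
publication_title	print_identifier	online_identifier	date_first_issue_online	title_url	title_id	embargo_info	coverage_depth	publication_type
Journal of Example Studies	1234-5678	2345-6789	1997-01-01	https://www.example.com/jes	jes	R12M	fulltext	serial
```

Here `R12M` denotes a rolling 12-month embargo in KBART’s embargo syntax, and the full recommended practice specifies additional columns (volume and issue ranges, publisher name, and more). Keeping these values accurate and consistent across every knowledge base a library relies on is precisely the supply-chain coordination work described above.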
Directly confronting poor user experiences with current access controls, several case studies described libraries experimenting with “access broker” browser plug-ins, like Lean Library and Anywhere Access. Assuming users are willing to do a little setup, such as installing a browser extension or making an initial connection with their library, the challenge becomes the marketing and communication needed to promote such solutions. Striking a balance between the hassle-free user experience many patrons have come to expect of digital services and the spaghetti-tangled reality of digital content platforms and library systems was an underlying theme throughout the conference this year.
Single sign-on solutions were mentioned in many sessions, regardless of topic, and OpenAthens was certainly the talk of the town. There were several OpenAthens case studies, and the service earned buzz in the exhibit hall for its obvious benefits for everyone involved, particularly in combating fraudulent access and improving data security. I predict OpenAthens will quickly become a necessity for publishers and service providers to offer library customers going forward.
Some impressive teamwork was demonstrated by publishers and libraries working with PSI and their IP registry to detect, identify, and remediate points of fraudulent access. Framing their work as particularly focused on Sci-Hub and other illegal activity, publishers like IEEE and universities like UT Austin and Monash University have put in a great deal of effort to reduce the time it takes to identify and address a compromised user account. Findings from these experiences, however, were disconcerting: we are all in a reactive stance, playing whack-a-mole with criminals. Even obvious solutions like two-factor authentication can be undermined by hackers who have gained control of the cell phone number associated with a given account.
More comprehensive remedies that could potentially be game changers for libraries and publishers also have drawbacks, posing challenges to other priorities, such as privacy policies and library user experiences. The most dramatic solution to Sci-Hub was raised during the question/answer period after these presentations, both in person and online. Several librarians voiced their support of open access models, which remove the need for access controls of any kind. These conversations carried on into other events and added to the overall buzz around the University of California decision to cancel their Elsevier subscription.
Speaking of cancellations, one of the more interesting sessions came toward the end, from Laura DeLancey, Electronic and Continuing Resources Coordinator at Western Kentucky University. She shared her library’s experience with offsetting budget limitations by offering alternatives to traditional subscriptions: pay-per-view services through their interlibrary loan office. While 35% of ILL requests could be fulfilled with existing holdings, open access versions, and resource sharing, the majority of document-delivery requests were fulfilled through either a token model from Taylor & Francis or Get It Now, the aggregated solution from the Copyright Clearance Center.
DeLancey complemented these perspectives with early findings from a survey she conducted with colleagues, exploring what best practices were evolving around such document-delivery options in academic libraries. Among the 67 libraries surveyed, DeLancey found that many were using PPV services heavily mediated by ILL staff and that the majority found these programs to be very successful. It seems these article-level sales models, whether direct from publishers or aggregated by third parties, are poised to further disrupt traditional subscriptions and big-deal offerings.
On the discovery front, highlights included Semantic Scholar marketing directly to libraries with its own booth in the exhibit hall this year. Other new and noteworthy items included the upcoming launch of Innovative Interfaces’ new discovery service; rather than being the next generation of Encore, Innovative Discovery will take a unique approach to leveraging a library’s existing MARC records and native linked data. It was very encouraging to see “symbiotic” success stories shared by libraries, publishers, and the Wikipedia Library group.
Takeaways from all this for publishers? The return on investment for metadata optimization, discovery platform participation, and supply-chain teamwork remains strong. Adopting an operational understanding of users’ information journeys and their contextualized needs, tasks, and expectations pays off every time. Organizations that prioritize KBART feeds and take a strategic approach to ensuring content discoverability will continue to be rewarded with usage, citations, and loyalty from readers and libraries. As ER&L often emphasizes, partnerships were a key thread throughout the conference, underscoring the critical role of cross-sector collaboration in navigating the challenges at hand and the transformations ahead.
Discussion
6 Thoughts on "Of Paywalls and Proxies: The Buzz about Access at ER&L 2019"
My suggestion for access controls: Attack the motive. The mechanism will prove elusive, but the motive can be relatively readily overcome with enlightened monetization.
Great round up, Lettie! Super helpful. Thank you!
One thing that appears overlooked in these access control discussions is the user. Who exactly is the user? Will they jump through the hoops to get it to work, or just go to Sci-Hub?
When you talk to them in focus groups, it turns into this Oatmeal cartoon more than anything else (last box in it NSFW): https://theoatmeal.com/comics/game_of_thrones
Thanks, Lettie, for the useful summary.
I have to say that one of the things I was most struck by at ER&L was something that was missing: the cost discussion. From the library side, there was little talk about the fully loaded costs of implementing and using new access and authentication schemes; or the opportunity costs of having institutional access deactivated while the Sci-Hub pirate chase ensues; or, more importantly, any framework for measuring the value of those activities over time.
The omission of cost discussions was even more glaring from the publisher/content provider side. All of the things you helpfully summarized have costs associated with them: from KBART or COUNTER 5 compliance; to ensuring content is indexed *appropriately* in the expected discovery services; to supporting OpenAthens, RA21, and other emerging authentication frameworks; to providing text and data mining services for the content licensed. There is an unspoken expectation amongst libraries that all of these things should be supported, but the costs associated with that support are simply “the cost of doing business” for the provider.
Publishers/content providers need to take opportunities like ER&L to be more transparent about the costs they are incurring to support these initiatives and set expectations with their customers about how those costs will be reflected in their pricing.
How do we expect to support new initiatives like these in an Open Access ecosystem? Do we magically expect APCs to increase annually to provide the financial support for these wonderful new ideas? Are we going to bifurcate the content from the services in such a way that the APCs will cover the content access (and publishing), but the libraries will be expected to support the preferred platforms?
I worry about how innovation will flourish in our ecosystem if we continue to put our heads in the sand with regard to what things cost and the expectations that we should all have – together – to support them in a sustainable way.
Thank you for this article. I was struck this year by how OpenAthens-centric ER&L was. Our library, for complicated reasons, has both EZproxy and OpenAthens. Our plan is to transition entirely to OpenAthens at some point, but the more I work through this process (trying to get it to work with OCLC), the more I question: why? We host our own instance of EZproxy and have been shut down maybe twice in my two years here. With an email in the morning, our access is re-established with the publisher. Other than that, EZproxy works incredibly well, and I’m able to instantaneously fix any proxy issues without having to open a ticket with another service provider.
After an OpenAthens presentation, I asked the presenter whether they would have made the switch if their proxy had been working; basically, what is the upgrade in proxying that comes with OpenAthens? She said she wouldn’t have switched, and I’m having a hard time finding the benefits myself. While OpenAthens is certainly being pushed, I’m not entirely sure I’m sold on it.