Even when they are well-marked, obstacles impede research and drive away researchers (Pedestrian obstacles, High Road, Buckhurst Hill, by Robin Webster, CC BY-SA)

Scholarly publishers and content providers have made great strides in the design and usability of their online platforms in recent years. Some of these developments have come through the work of underlying platform providers such as HighWire and Atypon, while others have come through proprietary platforms. Even so, the effects of these improvements have been modest relative to the needs of researchers.

One important dynamic, which is all too frequently ignored, is that content providers operate in the systems ecosystem of the licensing academic library. This ecosystem incorporates a number of services and intermediaries, and, for all the creativity that has gone into it, it nevertheless has real weaknesses. Content providers have struggled to integrate their offerings into this ecosystem as seamlessly as researchers need.

In a recent Ithaka S+R issue brief, I wrote about some of the various stumbling blocks that off-campus users, especially, experience in using licensed e-resources. Even for common research workflows, off-campus researchers can be required to click through seven or more webpages in order to gain access to an article or book that they have already discovered. Similarly, mobile devices are all too often poorly served by site design and by the web apps created for them, which is surprising given libraries' interest in investing in mobile solutions and the strategic opportunity to incorporate the sensors and services these devices offer. These challenges can be found across the range of content providers, libraries, and various intermediary services, and collectively I believe they are driving users away from licensed e-resources and towards open access materials.

In one view, the obvious solution is for libraries to redouble their advocacy for open access solutions, which are definitionally immune from authentication challenges. Without taking anything away from the advantages of open access here, authentication is only one piece of the puzzle. Both licensed and open resources share access shortcomings around discoverability, personalization, and usability.

To adapt, publishers, libraries, and intermediaries need to examine not only the usability of their own platforms and how they can continue to be improved, but also how they are in practice used in scholarly research alongside other platforms and services. To do so, they cannot bring researchers into their usability labs, but instead they must engage researchers in their workplaces, in campus offices, labs, libraries, and dorms, and equally in off-campus homes and housing.

At the same time, some of these issues cannot be solved by individual publishers. For example, new authentication approaches might require a common framework or provider, whether something organized on a community basis through NFAIS, NISO, or the Shibboleth Consortium, or on a more proprietary basis through a service like Mendeley, ReadCube, or ResearchGate. In my view, access stumbling blocks offer additional evidence that a completely different approach to user accounts would be beneficial.

There are some notable alternatives. For example, EBSCO and ProQuest are assembling the building blocks to develop more seamlessly integrated ecosystems. Their content platforms contain large portions of the books and serials that are typically required for undergraduate education, potentially limiting the number of transitions required across content platforms, and they offer a variety of library management systems and discovery tools, including index-based search services, knowledge bases, link resolvers, and more. Understanding these providers not only as components of the complex and difficult academic systems ecosystem, but potentially as alternatives to it, is an important strategic consideration for libraries and publishers alike.

Ultimately, researchers face unnecessary frustrations due to outdated systems and interfaces, cobbled together as much as they are designed. As distance learning continues its inexorable growth and research practices continue to anticipate always-connected devices, it is becoming more urgent for libraries, content providers, and other intermediaries to address these problems.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.


24 Thoughts on "Dismantling the Stumbling Blocks that Impede Researchers’ Access to E-Resources"

Perhaps I am over thinking the metaphors, Roger, but you seem to be calling for integration on a massive new scale. Is that it? I would not describe the way the world presently is as an outdated stumbling block, although I am sure there are opportunities for improvement. Proposals need to be spelled out and justified, in my view anyway. What are you proposing?

Some recommendations are included in the issue brief that is linked from this post.

Sorry, Roger, but when a blogger offers the “go read this and get back to me” gambit I tend not to take it. If you are proposing to “dismantle the stumbling blocks” you should say how. The discussion below suggests that you propose a universal identity system. Is that it? Everyone gets a number?

Roger, I’m curious why you don’t mention ORCID as the most logical choice for a “common framework” for authentication in the scholarly publishing context?

ORCID would need to expand to cover all users and not just creators, and to move into the authentication space. It would be fascinating to see ORCID expand in these directions.

ORCID is not limited to content creators – anyone can register for an ORCID.

ORCID is based on OAuth2, which is probably the world’s most widely-used distributed authentication protocol (for example, it’s used by Facebook). It’s already in the authentication space.

Further up the supply chain, during manuscript submission and peer review, hundreds of journals already provide single sign-on using ORCID. For example, you can log in to both PLOS and eLife for manuscript submission using your ORCID, even though the competing organizations use different systems with different databases.

Obviously, further initiatives from ORCID to enable federation (using SAML) would be beneficial, but if you’re looking for an infrastructure solution for authentication it arguably already exists in ORCID.
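To make the exchange above concrete, here is a minimal sketch of the OAuth2 authorization-code flow that ORCID sign-in uses. The endpoints and the `/authenticate` scope come from ORCID's public API; the client ID and redirect URI are placeholder values a publisher would replace with its own registered credentials.

```python
# Sketch of the OAuth2 authorization-code flow behind "sign in with ORCID".
# Endpoints and the /authenticate scope are from ORCID's public API docs;
# client_id and redirect_uri below are placeholders.
from urllib.parse import urlencode

AUTHORIZE_URL = "https://orcid.org/oauth/authorize"
TOKEN_URL = "https://orcid.org/oauth/token"

def build_authorization_url(client_id: str, redirect_uri: str) -> str:
    """Step 1: send the researcher to ORCID to log in and approve access."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "scope": "/authenticate",
        "redirect_uri": redirect_uri,
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

def build_token_request(client_id: str, client_secret: str,
                        code: str, redirect_uri: str) -> dict:
    """Step 2: after ORCID redirects back with ?code=..., the publisher's
    server POSTs these fields to TOKEN_URL and receives the user's ORCID iD
    plus an access token -- no local password database required."""
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    }

url = build_authorization_url("APP-XXXX", "https://journal.example.org/callback")
print(url)
```

Because each journal only receives an authenticated ORCID iD and token, two competing submission systems can both recognize the same researcher without sharing a user database, which is exactly the PLOS/eLife situation described above.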

It seems like ORCID could be fine for authentication (though ORCID does have the problem that claiming a profile seems to have no safeguards to confirm that the person doing the claiming is indeed that person?) but ORCID doesn’t seem to have the necessary infrastructure for authorization of access to resources (content, software, network, etc.). Am I missing an aspect of the service that would enable this?

Identity validation is indeed a problem. To address this, ORCID set up a “multiple assertions working group” (MAWG), but I don’t know how much progress they have made.

The software and networking to validate an ORCID are all in place. But the level and scope of access granted to the proven owner of an ORCID (authenticated via OAuth2) is a matter for the local system, and need not, and would not, be controlled by ORCID or any other central entity.

Many university campuses employ single sign-on systems that draw from an authentication server. A Learning Management System (LMS) and a Student Information System (SIS) are thus able to identify and respond to students, staff and faculty by role, departmental affiliation and other variables. These extant facilities present very tempting opportunities for integrators to design ways and means for providing unfettered but customized access to informational resources.

These systems typically are connected to a proxy server or a Shibboleth implementation for authentication for licensed e-resources. Both have their shortcomings in current implementations, as I discuss in the issue brief.
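The proxy approach mentioned above is worth illustrating, since it is the source of several of the extra clicks off-campus users face. The sketch below shows the proxy-prefix pattern used by services like EZproxy: the library wraps a publisher URL so the request routes through a library-run host that authenticates the user and rewrites subsequent links. The proxy hostname here is a placeholder; the `login?url=` convention follows EZproxy-style starting-point URLs.

```python
# Minimal sketch of the library proxy-prefix pattern (EZproxy-style).
# The proxy hostname is a placeholder; "login?url=" follows the common
# starting-point URL convention for such proxies.
from urllib.parse import quote

PROXY_PREFIX = "https://ezproxy.example.edu/login?url="

def proxied(url: str) -> str:
    """Wrap a publisher URL so the request routes through the library proxy,
    which authenticates the off-campus user before forwarding the request."""
    return PROXY_PREFIX + quote(url, safe=":/?&=")

print(proxied("https://doi.org/10.1000/182"))
```

Every link a researcher follows must carry this wrapping to stay authenticated, which is why a single un-proxied link in a citation or search result can silently drop an off-campus user onto a paywall.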

The CCC is one organization that has worked for many years on systems that allow for obtaining permissions for use of copyrighted materials without disrupting the user’s workflow, i.e., having to go to some other site to obtain permission. Perhaps this kind of system could serve as a model for what Roger has in mind.

The cases I am focusing on are those where the materials are already licensed for the researcher’s use. Authentication is the sole problem. What is CCC’s solution in this space?

Perhaps ironically, perhaps tellingly – I started to write this comment quite some time ago, but wanted to check a couple of articles first to be sure my memory of them was correct. Forty minutes later, after two email messages to report problems with DOI and SFX resolution, and an error message in a content system when trying to email myself a PDF … I’m back!

Anyway … I’d suggest eduroam be part of thinking on this as well. FWIW, I am particularly struck by the proposals outlined here for student records transfer among institutions: http://www.echallenges.org/e2009/outbox/eChallenges_ref_75_doc_5803.pdf. I would note that this proposed approach to grade, course, and health records includes a concept of user control that is somewhat aligned with your idea of users controlling their data in http://scholarlykitchen.sspnet.org/2015/02/05/data-for-discovery/

Thanks Lisa, this is very interesting. Eduroam does seem like it might be a firmer foundation on which to build than some of the other ideas I mentioned in the piece.

It is so discouraging to hear again and again about the amount of time and productivity that goes to waste from these problems!

Researchers won’t choose open access over difficult to access content if the difficult to access content is valuable; they will just be rightly irritated.

In our increasingly “green” environment, sufficiently complete versions of many if not most new articles can be found in free versions via a simple Google search.

Dr. Schonfeld: Thank you for your post. I often have similar frustrations from the library behind-the-scenes perspective of managing online coverage and access; having to touch a handful of different interfaces to add or edit metadata for titles isn’t terribly efficient. If I were to compare that to, say, a manufacturing assembly line system, there would most likely be no comparison. The vendor response would most likely be to buy their “seamlessly integrated system” and it will take care of that, but of course, the large knowledge base players, e.g., EBSCOhost, ProQuest, and OCLC, don’t consistently cooperate with each other. Negotiations are required, and in the meantime, libraries and end users are left to wait for more comprehensive metadata. Why can’t knowledge bases be non-proprietary? Thanks again, and I look forward to your response.

Roger draws attention to an important issue – and it’s one that proves particularly challenging for users in developing countries where INASP works. Many universities and research institutes – via their respective national consortium – now have substantial journal (and increasingly e-book) content available. Over 1900 institutions are registered for access via INASP to collections which at the most extensive encompass 50,000 journals and 20,000 e-books. This builds on the extensive access institutions have via Research4Life (45,000 titles) and other schemes.

But the complexities of authentication – particularly when the software and systems needed to do this are relatively expensive or require good technical skills to install and manage them – mean that e-resources aren’t used nearly as much as they could be and this is a real challenge. Our training programme includes courses for IT systems librarians and e-resource managers (http://inasp.info/en/training-resources/), and a number of partners have been able to set-up proxy systems and discovery services to tackle these ‘breaks’ in the access chain. We’re also working with NRENs in a number of countries to tackle the underlying network architecture on university campuses (http://blog.inasp.info/ict-infrastructure-education-people-technology/).

David comments “Researchers won’t choose open access over difficult to access content if the difficult to access content is valuable; they will just be rightly irritated.” Unfortunately this isn’t true in developing countries – many will assume that the content is simply inaccessible to them and only the most determined will embark on the longer process of finding an accessible copy, talking to their librarians, or seeking the help of colleagues abroad. Open access doesn’t just solve a subscription cost problem for many universities – it removes a huge number of access/authentication hurdles too. It’s important that the conversations on authentication and access are extended to research and library communities in Africa, Asia and Latin America – access barriers are another obstacle to greater developing country participation in global research, participation that is really needed.

Jon, I agree that access is a bigger stumbling block in the developing world; particularly so for streaming video, data sets, etc. That said, the point I mean to make is that great scholarship has always overcome the challenges of access. The very best scholars ferret out the very best resources. It is too simplistic to suggest scholars will simply choose a lesser resource because it is open access. And, if they do, they will have a lesser outcome with the associated lesser career return, etc.

Hi David, I agree – scholarship does find a way to overcome the challenges in its way. But I’m also remembering lots of conversations over the years with African researchers who tried to access resources, failed to, and this failure reinforced their perception or ‘knowledge’ that these things weren’t accessible to them anyway, because they never had been. ‘We don’t have access to online journals, they’re too expensive. I tried and it asked me to pay $25 to access the paper’ is such a common refrain amongst African researchers that I talk to – or their collaborators here in the UK speaking on their behalf – that I can only conclude that a lot of research is weaker than it should be, or doesn’t build on the latest knowledge and methods as it could do, in part because access barriers are getting in the way. I don’t think it’s a case of consciously or deliberately choosing a lesser resource because that’s the one you can access (OA or otherwise) but because sometimes you just work with what you have. And I include students in this understanding of scholarship – and they are certainly less likely to pursue harder-to-access resources if they don’t understand the relative value, quality or relevance of different resources. Some great scholarship may overcome the barriers. But we risk losing a lot of potentially good scholarship along the way!

David, it may be true that great scholarship always finds its readers, but access barriers absolutely lead researchers to choose different sources than what they are seeking. In Ithaka S+R surveys of academics in the US, UK, and beyond, we find consistently that a notable share of respondents give up and look for something else when they experience challenges accessing an information resource through their library. In the US overall, the share that report doing so often or occasionally is about half of academics! Even more look for an open access version when a licensed version isn’t immediately available to them. See the discussion on pages 35-37 and especially figure 18 here: http://www.sr.ithaka.org/research-publications/us-faculty-survey-2012

Comments are closed.