Editor’s Note: Today’s post is by Gaby Appleton, Managing Director of Mendeley and Research Products at Elsevier. In this guest post, Gaby expands on some of the themes she discussed during the recent STM Association’s ‘STM Week’ panel debate on The future of access 1: a supercontinent for content?, and the work Elsevier is doing in these areas. The panel was moderated by Scholarly Kitchen Chef Roger Schonfeld.
One of the highlights of my job is spending time with researchers, trying to understand how we can support them in doing groundbreaking research. Their enthusiasm is always infectious, and I am energized by the challenges they have set for themselves. When we talk, it doesn’t take long before the conversation turns to the increasing demands researchers face: finding a new role, finding the right collaborators, discovering and accessing relevant content, organizing knowledge, and showcasing and evaluating their work. One thing that always surprises me is how many different tools researchers use to accomplish all these tasks. I’m often asked: “Can I share information from your application with a colleague who uses a different tool?” It’s a highly relevant question in a world where research is increasingly international, multi-disciplinary, and subject to various funder mandates on how to manage and share different types of outputs. It’s a big, but rewarding, challenge for those of us who build these tools.
Although there are many tools that are commonly used by almost all researchers (Google Scholar is one), the research ecosystem for managing information is surprisingly diverse, with companies, non-profits, universities, and researchers themselves creating new platforms and applications all the time. These tools help researchers manage the unprecedented amount of knowledge and data that is available to them today, but each has its limits – there is no one-size-fits-all answer. They are also fragmented, with different data models and limited interoperability. As an example, the scholarly journal access infrastructure lacks standardization and reflects the complexity of the university customer landscape it has served and evolved with over several decades. This means we are sometimes burdening researchers with a complex experience instead of speeding them up.
Collectively, we must build solutions that work seamlessly for researchers and help solve the problems they face. We need to break everything down, identify specific problems, and then build solutions that address them and work together. By doing this collectively, we have an opportunity to transform the research ecosystem.
I spoke about this recently, along with Jan Reichelt from Kopernio (Clarivate Analytics) and Rob McGrath from ReadCube (Digital Science). During the Q&A, the session moderator Roger Schonfeld asked us if we thought the industry was going to see the emergence of a “supercontinent” of scholarly publishing. I said I don’t see that; instead, my wish is for a connected galaxy of knowledge that researchers can travel through at light speed. There are many stars in the galaxy, each representing an information tool or product, and we want to allow for new stars (innovative tools) to emerge, and for researchers to avoid getting sucked into the productivity black holes created by breakdowns in access and by poor-quality or missing information. At Elsevier, we refer to this ecosystem of tools and data as “the information system supporting research.”
From conversations with researchers, managers of research institutions, and funders, four clear principles start to emerge about how this information system can better support the needs of the research community:
First, the information system supporting research must be source-neutral. It should not privilege content or data from any one publisher or source, so that researchers can be confident that they are getting an unbiased view. Our preprint server SSRN is a good example. Researchers can quickly share early-stage research – preprints, working papers and data – on the platform prior to publication. It’s “publisher agnostic,” allowing researchers to rapidly showcase their ideas for free, regardless of where the work may end up being published. Researchers are then able to easily search and discover this work on the platform, leading to the sharing of ideas and greater collaboration.
Second, components from different providers in the information system supporting research should work together. Interoperability between applications, tools and data sets will allow researchers to use whichever platform they choose while providing a seamless workflow experience. To give you an example: Mendeley Data enables researchers to search for others’ data in 35 open repositories, and also to share their own data in ways that are under their control. It is also integrated with other sharing platforms that researchers use, like Dropbox. The work of ORCID is also a shining example of enabling interoperability. It provides a unique digital identifier for individual researchers, which enables transparent connections between researchers and their works and affiliations across disciplines, borders and time.
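To give a concrete flavor of the identifier-level plumbing that makes this interoperability possible: an ORCID iD is a 16-character identifier whose final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so any tool can verify an iD locally before linking records to it. The sketch below is an illustrative validator (the function names are my own, not part of any ORCID library), using ORCID's own well-known example iD:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check digit for the first 15 digits of an ORCID iD."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    # A result of 10 is represented by the letter 'X' in the final position.
    return "X" if result == 10 else str(result)


def is_valid_orcid(orcid: str) -> bool:
    """Validate a formatted ORCID iD such as '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15]


# ORCID's published example iD passes; a one-digit corruption does not.
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0098"))  # False
```

Because the checksum is self-contained, any application in the ecosystem can catch a mistyped iD before it pollutes the transparent researcher-to-works connections described above.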
Third, transparency. If a researcher receives an automated recommendation of an article to read, they want to know how that recommendation was arrived at, and why it is relevant to them. If an application sends an email alert, the researcher should know why it was sent, and have the option to turn it off. For example, in Scopus, users have the option to sign up colleagues for specific search, author and citation alerts; in that case, the colleague receives an alert explaining that they have been signed up and is given the opportunity to opt out.
Finally, we must put researchers in control. People should be able to set their own preferences and parameters, for example on what should be shared on their behalf, and what they prefer to keep private. No single technology can, or should, make decisions on behalf of the researcher. Elsevier’s privacy principles are a good example of this: when we design and develop products we adhere to these principles and constantly think about how to build in granular options for user choice and control. We must always take into account people’s concerns about privacy; that is why our user privacy center enables individuals to easily access and update their personal information, review our policies, and get further support should they need it.
During our Q&A with Roger, he asked for an example to help bring all this to life, and RA21 (Resource Access for the 21st Century) is an obvious example of taking steps towards improving the researcher experience of access to published content, through standardization and interoperability. This joint STM and NISO initiative has widespread participation from all key stakeholder groups, including publishers, librarians, campus IT departments, vendors and identity federation operators. A number of pilot projects have been completed and the group is currently finalizing its recommendation to the industry on how we collectively provide users with a simple, seamless, customizable and secure way to access scholarly information.
We need to work closely with the research community to address the challenges they currently face and to co-create the solutions of the future. I’m excited about making every aspect of the research lifecycle more connected, more transparent and more inclusive.
1 Thought on "Guest Post: Supporting a Connected Galaxy of Knowledge"
Why not have Watson and similar AI “partner” with the researcher, automating search and adding new sources as they appear? This goes back to the previous thread and the noted exchange in this column. Watson with a “database passport” could, as the old Yellow Pages phrase has it, do the “walking” rather than the researcher. As it is done today, the researcher needs to learn each scheme, and so will Watson. Automating this should save the database providers the countless resources, time and money, now spent meeting and discussing “joint protocols”. Of course, the current path leads to consolidation by the major players such as RELX.