Having worked in scholarly publishing my entire career, I’ve attended plenty of industry conferences over the years. But since joining ORCID, I’ve been going to a whole lot more that are outside the scope of my previous role and/or that I’d never even heard of. The recent VIVO conference in Cambridge, MA falls firmly into the latter category.

For those of you who, like me, aren’t familiar with VIVO, it’s a relatively new organization, formed following a successful NIH-sponsored project that ran from 2009-2012. Essentially it’s an open source researcher profile system that enables the discovery of research and scholarship across disciplines and institutions. It integrates with a number of other systems (HR, grants, faculty activity databases, and more) and has been adopted by a number of associations, funders, and universities in the US and beyond.

Researcher profile systems – both commercial (think ResearchGate or Thomson Reuters ResearcherID) and nonprofit (like VIVO) – are an increasingly important part of the scholarly communications infrastructure. But, other than worrying about the article sharing that some of them allow (or even encourage), they weren’t really on my radar when I was working in scholarly publishing. So VIVO was quite an eye-opener.

Very few publishers attended, but there was an abundance of representatives from other areas of scholarly communications – librarians, research administrators, technologists (lots of them!), vendors, and more. They’re a passionate and dedicated bunch, which made for an interesting meeting, even if some of the technology discussions were a little hard to follow for a non-techy type. (As a side note, I loved the fact that two of the sessions I attended were #allfemalepanels – a first for me at an industry conference.)

I certainly came away better informed about what VIVO and other organizations in that space are doing. And, because it gave me a view of a whole new aspect of scholarly communications, it got me thinking about the need for all of us who work in this field – funders, librarians, publishers, researchers, vendors, and others – to have a better understanding of the whole research infrastructure, not just the bit(s) that we are directly involved in or that intersect directly with us. It’s a topic I’ve been giving a lot of thought to recently, as have several others, so it’s one that I hope will start cropping up more at our industry conferences.

Why does it matter? There are several reasons:

  • First, ultimately we are all serving the same audience – researchers. The better we all understand what they want and need, the better we will be able to continue to develop services to meet those needs. Doing so collaboratively and, where appropriate, collectively will enable us to provide them with a better service. CHORUS is a nice example of this. Founded by a group of publishers in response to the OSTP memo requiring federal agencies to make the results of their funded research publicly available, it’s a cooperative venture that builds on existing infrastructure to help researchers comply with these requirements, and to help funders monitor and report on the impact of the research they fund. Another – quite different – example is Digital Science, a for-profit company which has done a great job of acquiring and developing a suite of services that serve researchers and their organizations at all stages of the research cycle. Rather than restricting those services to researchers working with its parent company, Holtzbrinck, it is making them widely available, so that, for example, Wiley authors can benefit from access to Altmetric, Taylor & Francis’ researchers can use ReadCube’s enhanced functionality, and so on.
  • Second, in the same way that understanding what researchers want and need enables us to develop services to meet those needs, understanding what other players in the research infrastructure are doing enables us to develop those services in ways that are interoperable, ultimately saving all of us time and effort in future. Unique, persistent identifiers (PIDs) – for people, places, and things – are an important part of this, especially in the digital age, as they allow unambiguous linking between different systems. Publishers figured this out years ago – the ISBN dates back nearly 50 years and, in the last 15 years or so, many publishers have implemented services such as CrossRef (which mints the unique DOIs now widely used for research outputs), ORCID (1.6m+ unique researcher iDs registered and counting), and Ringgold (400,000+ academic and professional institutions with unique identifiers globally). Librarians are also strong supporters of PIDs, and of standards in general. Arguably closer to researchers than any of us, they really understand the value of a strong, rigorous research infrastructure. Back to VIVO for a moment: some of my favorite sessions and conversations focused on the need for better taxonomies. Call me nerdy – but yes, yes, yes!
  • A third reason for us to collectively develop a better understanding of the research infrastructure we are all part of is so that we can do a better job of explaining it to others. While I suspect that many of us aren’t as well-informed as we could be, I’d put money on the fact that we know more than most funders and researchers – the people who are paying for the research and actually doing it. For example, although most researchers I know are aware of DOIs, they have never heard of CrossRef. Many funders are equally ignorant, even of FundRef, a service which was created specifically to meet their need to monitor the research they fund. Both groups are much more likely to pay attention to our collective investment in the research infrastructure if we speak with one voice, something which I hope will be more of a priority in future.
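As a concrete illustration of how PIDs are built for machine use: the final character of an ORCID iD is a checksum (ISO 7064 MOD 11-2), so any system can verify that an iD is at least well-formed before linking to it. A minimal sketch in Python (function names are my own, not from any ORCID library):

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check character for the first
    15 digits of an ORCID iD (hyphens already removed)."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    # A result of 10 is written as the letter "X"
    return "X" if result == 10 else str(result)


def is_valid_orcid(orcid: str) -> bool:
    """Validate a hyphenated ORCID iD such as 0000-0002-1825-0097."""
    chars = orcid.replace("-", "")
    if len(chars) != 16 or not chars[:15].isdigit():
        return False
    return orcid_check_digit(chars[:15]) == chars[15]
```

For example, `is_valid_orcid("0000-0002-1825-0097")` (the well-known test iD for the fictional researcher Josiah Carberry) returns `True`, while changing any digit makes the checksum fail.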

In the meantime, viva VIVO – thanks for giving me lots of food for thought, and I hope to be back again in 2016!

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.


9 Thoughts on "Viva VIVO! Thinking More Broadly About the Scholarly Communications Infrastructure"

As has been noted, there are some 20K scholarly publications across all areas, and a resultant several million articles. Editors and reviewers, as well as funders and consumers (and publishers), need to find the relevant content. It’s not just the categorizing and tagging of articles; it’s also determining what may be critical and what is persiflage that feeds a self-reinforcing publish/perish/funding network. The arrival of increasingly clever, fast, lower-cost, and increasingly ubiquitous semantic search engines, aided by these tags and article persistence, will benefit all. The question to be ultimately addressed is the survival of many of these publications once open access and intelligent search lay the materials out. Near term, not to worry.

Wholeheartedly agree it is worthwhile to look at the research infrastructure as a whole, to understand and interpret what’s happening in scholarly communication at the moment and how best to support researchers in that. We have been doing this for a while now in our project ‘101 innovations in scholarly communication’ (https://101innovations.wordpress.com), both by looking at the supply side (mapping the landscape of tools currently available to researchers) and the demand side (what tools researchers are using in various phases of their workflow).

Regarding the former, we observe many publishers increasingly working towards providing an ecosystem of tools to support researchers throughout their workflow, and libraries broadening their focus in supporting researchers not only in discovery and publishing, but in the whole research cycle.

For the latter, we are currently running a worldwide survey among researchers (https://innoscholcomm.typeform.com/to/Csvr7b?source=S, > 4500 responses so far), asking them about the tool combinations they use for various research activities, as well as what they perceive to be the most important development(s) in scholarly communication in the coming years. We hope to spread the survey as widely as possible, and all results will be made openly available.

The point about the need for better taxonomies is interesting, since I work in that area. Can you elaborate on what these taxonomies might be of, or for? In that regard the Federal STI folks have done some interesting work, under the rubric of knowledge organization systems or KOS. They might be worth talking to. Their informal organization is CENDI: http://www.cendi.gov/.

If folks are looking for taxonomies of the physical sciences, one powerful case is what I call Word Web, developed by DOE OSTI. It is actually a complex logical structure, but so is science. See the following:

Although I can’t speak for him, I’d imagine Stevan Harnad would be delighted with VIVO as it further enhances the infrastructure that promotes the sharing of Green OA articles through what he has promoted as the copy-request button from one researcher to another.

Alice, to your collection of IDs – here we assign IDs to scientific conferences: lod.springer.com

Thanks all for the additional links and info. Bianca I’d be very interested to hear more about your survey results when available!

Speaking of taxonomies, F1000 just published my latest: Wojick D and Michaels P. A Taxonomy to support the Statistical Study of Funding-induced Biases in Science [version 1; referees: awaiting peer review] F1000Research 2015, 4:886 (doi: 10.12688/f1000research.7094.1).

Comments are closed.