The theme of the recent ORCID-CASRAI conference, which took place in Barcelona on May 18-19, was Research Evaluation, with an emphasis on emerging practice in the humanities and social sciences. The result was an interesting combination of presentations, ranging from a fascinating look at digital research evaluation and emerging practices in research data management for the arts, humanities, and social sciences by Muriel Swijghuisen Reigersberg of Goldsmiths College, University of London, to more practical sessions, such as the panel on standards and tools for capturing outcomes and impact. Not to mention the Codefest, which ran in parallel with the main conference and resulted in nine very promising projects.
Summing up the conference at the end of day one, Liz Allen of the Wellcome Trust identified three recurring themes: challenges, connections, and conversations. Sticking with the letter C, I’d add a fourth – collaboration – which, for me, underpinned both the formal presentations and the informal discussions throughout the conference.
The challenges are, as Liz pointed out, many – although progress is certainly being made. As many speakers observed, research evaluation itself – and in particular measuring impact – is a controversial issue. Agreeing on which impacts should be measured, how, when, why, and by whom is a major challenge and, as several speakers noted, involving the research institutions and researchers being evaluated in the decision-making process from the outset is critical. Inevitably, many speakers also raised the thorny issue of metrics. In fact, the conference both began and ended with presentations on that topic. The opening keynote speaker, Sergio Benedetto (ANVUR), in my favorite quote of the meeting, asked: “Can we assess the beauty of the Mona Lisa by counting the number of visitors to the Louvre?” Meanwhile, in his closing keynote, Paul Wouters (CWTS), one of the authors of the recently launched Leiden Manifesto for Research Metrics, identified four problems with current academic research – the funding system, career structure, publication system, and evaluation system. Wouters believes that more information is both the cause of, and the answer to, these problems and, in his vision of the future, researchers would actually look forward to being evaluated, rather than dreading it!
Connections, which Liz Allen equated with opportunities, are equally plentiful – unsurprisingly, since arguably a challenge is an opportunity waiting to happen! Many speakers gave examples of how ORCID and CASRAI are helping to create these opportunities. For example, Simon Coles of the University of Southampton told us that there are currently 27 separate identifiers for their researchers – ORCID will be the 28th, but he believes it will ultimately eliminate the need for all the others. Aurelia Andrés Rodríguez of FECYT gave an update on CVN, Spain’s national CV system, which enables researchers to create standardized CVs linked to their ORCID iD, as well as to databases like Scopus and Web of Science. In one example she cited, these inter-system connections resulted in a 31% decrease in the resources needed to evaluate the researchers’ work.
Liz Allen also drew attention to the benefits of cross-sector conversations, which were much in evidence. Over 150 people attended the meeting, and it was great to see research funders talking to consortia administrators, third-party service organizations talking to researchers, publishers talking to research administrators, and more. Just as important, I think, was the global nature of many of these conversations. The conference highlighted initiatives from around the world, from the adoption of ORCID iDs by the Catalan universities network (some of which have close to 100% uptake!), to how KAUST in Saudi Arabia is leveraging identifiers to measure impact through analysis of scholarly publications, invention disclosures, patents and applications, startups, and industry collaborations, to the Jisc-CASRAI and Jisc-ARMA ORCID Pilot Projects in the UK.
My additional fourth C – collaboration – is perhaps the most important, and examples abounded. While I knew even before I joined ORCID that we placed a strong emphasis on collaboration, the extent of our collaborations with other organizations as demonstrated at the conference still took me by surprise. A couple of standouts were:
- the recently announced CASRAI/F1000/ORCID peer review project, which came about as the result of a chance conversation between Laure Haak of ORCID and Rebecca Lawrence of F1000, both of whom had been thinking about ways to address the peer review ‘crisis’. CASRAI subsequently became involved and a community working group was set up, with members representing Autism Speaks, Denison University, Journal of Politics and Religion, Cambridge University Press, American Geophysical Union, ISMTE, Origin Editorial, Sideview, University of Split, and hypothes.is. We are now kicking off an early adopters group of organizations that are starting to implement ORCID in their peer review processes.
- Project CRediT, a joint venture led by The Wellcome Trust and Digital Science, facilitated by CASRAI and NISO, and supported by the Science Europe Scientific Committee for the Life, Environmental and Geo Sciences, whose working group includes representatives from a further 11 organizations. The project also involves collaboration between ORCID and the Mozilla Badge project to define a practical application of the contributorship ontology.
Whether you were a newbie like me (it was my first week in my new job as ORCID’s Director of Communications) or an ORCID/CASRAI veteran, like many who attended, there was something for everyone at this conference, as you’ll see from the program, which includes links to the presenters’ slides on Slideshare where available.