The theme of the recent ORCID-CASRAI conference, held in Barcelona on May 18-19, was Research Evaluation, with an emphasis on emerging practice in the humanities and social sciences. The result was an interesting mix of presentations, ranging from a fascinating look at digital research evaluation and emerging practices in research data management for the arts, humanities, and social sciences by Muriel Swijghuisen Reigersberg of Goldsmiths College, University of London, to more practical sessions such as the panel on standards and tools for capturing outcomes and impact. Not to mention the Codefest, which ran in parallel to the main conference and resulted in nine very promising projects.
Summing up the conference at the end of day one, Liz Allen of the Wellcome Trust identified three recurring themes: challenges, connections, and conversations. Sticking with the letter C, I’d add a fourth – collaboration – which, for me, underpinned both the formal presentations and the informal discussions throughout the conference.
The challenges are, as Liz pointed out, many – although progress is certainly being made. As many speakers observed, research evaluation itself – and in particular measuring impact – is a controversial issue. Agreeing on which impacts should be measured, how, when, why, and by whom is a major challenge and, as several speakers noted, involving the research institutions and researchers being evaluated in the decision-making process from the outset is critical.

Inevitably, many speakers also raised the thorny issue of metrics. In fact, the conference both began and ended with presentations on that topic. The opening keynote speaker, Sergio Benedetto (ANVUR), asked, in my favorite quote of the meeting: “Can we assess the beauty of the Mona Lisa by counting the number of visitors to the Louvre?” In his closing keynote, Paul Wouters (CWTS), one of the authors of the recently launched Leiden Manifesto for Research Metrics, identified four problems with current academic research – the funding system, career structure, publication system, and evaluation system. Wouters believes that more information is both the cause of, and the answer to, these problems, and in his vision of the future researchers would actually look forward to being evaluated, rather than dreading it!
Connections, which Liz Allen equated with opportunities, are equally plentiful – unsurprisingly, since arguably a challenge is an opportunity waiting to happen! Many speakers gave examples of how ORCID and CASRAI are helping to create these opportunities. For example, Simon Coles of the University of Southampton told us that the university currently uses 27 separate identifiers for its researchers – ORCID will be the 28th, but he believes it will ultimately eliminate the need for all the others. Aurelia Andrés Rodríguez of FECYT gave an update on CVN, Spain’s national CV system, which enables researchers to create standardized CVs linked to their ORCID iD, as well as to databases such as Scopus and Web of Science. In one example she cited, these inter-system connections resulted in a 31% decrease in the resources needed to evaluate the researchers’ work.
Liz Allen also drew attention to the benefits of cross-sector conversations, which were much in evidence. Over 150 people attended the meeting and it was great to see research funders talking to consortia administrators, third party service organizations talking to researchers, publishers talking to research administrators, and more. Just as important, I think, was the global nature of many of these conversations. The conference highlighted initiatives from around the world, from the adoption of ORCID iDs by the Catalan universities network (some of which have close to 100% uptake!) to how, in Saudi Arabia, KAUST is leveraging identifiers to measure impact through analysis of scholarly publications, invention disclosures, patents and applications, startups, and industry collaborations, to the Jisc-CASRAI and Jisc-ARMA ORCID Pilot Projects in the UK.
My additional fourth C – collaboration – is perhaps the most important, and examples abounded. While I knew even before I joined ORCID that we placed a strong emphasis on collaboration, the extent of our collaborations with other organizations as demonstrated at the conference still took me by surprise. A couple of standouts were:
- the recently announced CASRAI/F1000/ORCID peer review project, which came about as the result of a chance conversation between Laure Haak of ORCID and Rebecca Lawrence of F1000, both of whom had been thinking about ways to address the peer review ‘crisis’. CASRAI subsequently became involved and a community working group was set up, with members representing Autism Speaks, Denison University, Journal of Politics and Religion, Cambridge University Press, American Geophysical Union, ISMTE, Origin Editorial, Sideview, University of Split, and hypothes.is. We are now kicking off an early adopters group of organizations that are starting to implement ORCID in their peer review processes.
- Project CRediT, a joint venture led by the Wellcome Trust and Digital Science, facilitated by CASRAI and NISO and supported by the Science Europe Scientific Committee for the Life, Environmental and Geo Sciences, whose working group includes representatives from a further 11 organizations. The project also involves collaboration between ORCID and the Mozilla Badge project, to define a practical application of the contributorship ontology.
Whether you were a newbie like me (it was my first week in my new job as ORCID’s Director of Communications) or an ORCID/CASRAI veteran, like many who attended, there was something for everyone at this conference, as you’ll see from the program, which also includes links to the presenters’ slides on Slideshare where available.
Discussion
Given that the focus of this conference was on the HSS fields, it was not clear to me how much of what you cite here is actually relevant to HSS. E.g., does the Wellcome Trust fund any HSS research? How much do HSS scholars actually use ORCID?
Also, I’m curious about what was said, if anything, about how digital HSS is being evaluated these days, both in the UK and the US? Back when the ACLS Humanities EBook and Gutenberg-e projects were started in the early 2000s, one drawback was that the main journals in history insisted on having something in print form to review rather than having reviewers actually review the entire project in its full online form. How much has that problem been resolved since then?
Thanks Sandy – to clarify, the main focus of the conference was research evaluation, but with an emphasis on HSS. Most of the sessions I attended included at least some discussion of HSS, though there were often more questions than answers including, for example, whether HSS is inherently harder to measure than STM or if HSS evaluation is simply lagging behind. I found Muriel Swijghuisen Reigersberg’s comments about the challenges of applying identifiers to, for example, works of art, performances, choreography, etc. especially interesting. In many cases there isn’t a solution yet – but the fact that this is being discussed seems like a good sign to me. As far as evaluating digital vs. print humanities is concerned, the UK seems to be moving toward digital (see HEFCE’s report on monographs and OA: http://www.hefce.ac.uk/pubs/rereports/year/2015/monographs/). There’s certainly still a way to go!
I do not agree with Wouters’ claim that the funding system, career structure, publication system, and evaluation system are problems per se. I am all for innovation (my title used to be senior consultant for innovation), but bashing existing practices is a common innovator’s mistake. As for the Mona Lisa, I do not know about its beauty, but the number of people who come to see it is certainly a measure of its great importance. Likewise for scholarship. Evaluation must be done, so it has to work with what it has.
I’m not sure I would agree that the number of people coming to see the Mona Lisa is an accurate measure of its “great importance,” if by that you mean artistic merit. It would be an accurate measure of its popularity and high public profile, certainly, but since when does mass public opinion determine true value? Think of TV shows or movies that have mass appeal but are not necessarily great works of art, like “Dumb and Dumber.” That is also true for assessment of scholarly value. Hits do not measure value, only interest.
My assumption is that people are going to see the Mona Lisa because of its artistic merit, making their numbers a measure of sorts. Are you suggesting that the Mona Lisa is not an important work of art?
There are of course many different kinds of importance. That is what we are trying to sort out with citations and altmetrics. What sorts of importance do they each measure?
Great crude comedy is also great art, by the way. “Dumb and Dumber” is very funny.
My guess is that most people go to see the Mona Lisa because they have been told it is a great work of art, not because they have enough knowledge of art to make an independent assessment themselves. That’s why we have art critics, music critics, movie critics, etc. In scholarly publishing, books like “The Bell Curve” have sold well not because they are great works of scholarship but because they have become controversial. Do you think “Fifty Shades of Grey” sold well because it is a great work of fiction? Sales alone are seldom a true metric for quality.