We all know that there are many different ways that researchers contribute to progress beyond just the publication of their own results: peer review of all shapes and forms; serving on editorial boards and as editors; volunteering in their scholarly community or association; helping plan conferences; and participating in promotion, tenure, and hiring committees, to name but a few. We also know that many contributions to research are made by people who aren’t themselves researchers, such as librarians, data specialists, lab managers, and others. And yet, despite the best efforts of initiatives like DORA, the San Francisco Declaration on Research Assessment (now signed by over 2,000 organizations and more than 16,000 individuals), metrics that focus primarily on citations, mostly of journal articles, remain the tool of choice for many — probably most — organizations that evaluate research and researchers.

Much has been written about why this is a problem — see, for example, these posts by fellow Chefs Phil Davis on the pros and cons of citation-based metrics and Karin Wulf on what citations mean for the scholars being evaluated. But these metrics continue to be widely used, at least in part because they are so widely available and because, until recently, there haven’t been any meaningful alternatives.

However, the last few years have seen the development of other ways to record and make publicly available other types of contribution to research, opening up the opportunity to use this information as an additional basis for evaluation. In this post, I’m focusing on two of these: CRediT, the Contributor Roles Taxonomy; and ORCID, in particular, their new(ish) membership and service affiliation section.


CRediT

I interviewed Amy Brand, one of the founders of CRediT, back in 2014. She explained the rationale for the initiative as follows:

I found myself wishing that there was a way for publishers to capture and display structured information about who contributed what to multi-authored works, instead of, or in addition to, the list of author names. Since I was very involved in the ORCID initiative at the time, and had also worked at Crossref for many years, it occurred to me that if it was possible to create a controlled vocabulary of contribution tags, then those tags could be included as additional metadata in association with the DOI and, ultimately, with an individual’s ORCID.

Several publishers and research institutions were involved in the development of the original taxonomy of 14 roles (Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, and Writing – review and editing); a core working group was facilitated by CASRAI (the Consortia Advancing Standards in Research Administration Information). Cell Press and PLOS were among the early adopters, and CRediT has now been implemented by over 30 publishers and publishing outlets, 10 platforms, and one university.
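To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; the field names are hypothetical, not the official NISO or JATS schema) of how the 14 roles can act as a controlled vocabulary attached to each contributor in an article’s metadata, alongside the DOI and ORCID iDs that Amy Brand describes above:

```python
# Illustrative sketch only: field names are hypothetical, not an official schema.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review and editing",
}

def make_contributor(name: str, orcid: str, roles: list[str]) -> dict:
    """Build one contributor record, validating roles against the vocabulary."""
    unknown = set(roles) - CREDIT_ROLES
    if unknown:
        raise ValueError(f"Not in the CRediT taxonomy: {unknown}")
    return {"name": name, "orcid": orcid, "roles": sorted(roles)}

# A multi-authored work, with who-did-what captured as structured metadata.
article_metadata = {
    "doi": "10.1234/example.5678",  # hypothetical DOI
    "contributors": [
        make_contributor("A. Author", "0000-0002-1825-0097",  # placeholder iD
                         ["Conceptualization", "Writing - original draft"]),
        make_contributor("B. Author", "0000-0001-2345-6789",  # placeholder iD
                         ["Data curation", "Formal analysis", "Software"]),
    ],
}
```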

It has also recently found a new home with my own organization, NISO (the National Information Standards Organization), where we are working with a small working group, led by the current CRediT co-chairs (Liz Allen – F1000; Simon Kerridge – University of Kent; Alison McGonagle O’Connell – O’Connell Consulting and DeltaThink), to formalize the original Contributor Roles Taxonomy as an ANSI/NISO standard. The group expects to complete editorial changes to the taxonomy language and submit the draft for final approval by NISO voting members and ANSI by September. Once that work is complete, a NISO standing committee will be set up to look both at how to promote the existing taxonomy and at how to expand it to meet the needs of other disciplines and workflows. That’s really important, because there has been some concern in the wider community that the original taxonomy is so focused on the scientific journal publication workflow. At my former organization, ORCID, we decided not to implement CRediT for that reason; however, I’m delighted to say that, because the taxonomy will be expanded, it is now officially on the ORCID roadmap! Which is a great segue to…

ORCID

The notion of contributorship, and of enabling recognition for it in all its forms, is central to ORCID. The “C” in ORCID stands for Contributor, and their vision is “a world where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions and affiliations across disciplines, borders, and time” (my emphases).

ORCID has always enabled connections to a range of “work types,” which were originally based on a CASRAI community-developed work type taxonomy. Unfortunately, that taxonomy is no longer being maintained but, prompted by community requests, ORCID last year added two new work types — for annotations and physical objects — although arguably others are still needed, especially to meet the needs of non-scientific disciplines, for example in the arts and humanities.

Looking beyond work types, ORCID has offered organizations the option of recognizing individual peer review contributions since late 2015. This information can only be added to ORCID records by a member organization, and it can then be shared by the record holder, for example, in a grant or employment application. Over 2.5M peer reviews have now been added to around 360,000 records, most by Publons, though publishers are increasingly starting to add this information themselves.
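Because this information is public (where the record holder allows it), anyone can read it back out. Here is a minimal sketch, assuming ORCID’s v3.0 public API and a record with public peer-review data (the iD below is a placeholder, and the exact JSON field names may differ slightly from my reading of the API):

```python
import requests

ORCID_ID = "0000-0002-1825-0097"  # placeholder iD for illustration

# Ask ORCID's public API for the peer-review section of a record as JSON.
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/peer-reviews",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
peer_reviews = resp.json()

# Reviews are grouped (e.g., by the journal or convener they were done for).
print("peer-review groups on this record:", len(peer_reviews.get("group", [])))
```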

Excitingly, in late 2018 ORCID also added several new affiliation types, including one for Membership and Service. Covering membership in an organization, or donation of time or other resources in the service of an organization, this affiliation is intended to help ORCID users record and (if they wish) share their participation in volunteer activities. Information about these activities can be added by the user and/or by an ORCID member organization (the provenance is clear both in the public record and in the ORCID API). To date, around 450,000 membership and service affiliations have been added to ORCID records — a whopping 99.9% by the record-holders themselves. It’s disappointing that so few member organizations are currently using this functionality, as it’s a great way both to recognize your volunteers — for editing journals, for peer review service over time, for serving on conference program committees, and more — and to make it easy for them to share that information with other individuals and organizations.

A Call To Action!

In the hope that readers of The Scholarly Kitchen are as enthusiastic as I am about the possibility of expanding the ways that we can recognize research contributions, I’d like to invite you and/or your organization to get involved in one or more of the following ways:

  • Adopt and/or implement the Contributor Roles Taxonomy at your organization. Whether you’re a publisher, a research institution, a funder, or a vendor, if you’re interested in different forms of contribution, this is a great starting point. I’d be remiss if I didn’t also note that there are already some other contributor roles taxonomies out there, including the CD2H Contributor Attribution Model and the FORCE11 Contributor Role Ontology (built on the CRediT taxonomy). Once the existing taxonomy has been formalized as a standard, one of the CRediT team’s next goals is to work with the community to support the creation of a best-of-breed taxonomy, one that will work for as broad a range of disciplines and workflows as possible. To keep up to date on progress — and get involved, including sharing your feedback — check the CRediT blog and follow @contrib_roles.
  • Add information about all forms of contribution to your researchers’ ORCID records. Yes, they can add that information themselves but, if you’re an ORCID member, why not do it for them? You’re saving them time and reducing the risk of errors, as well as recognizing — and validating — their service to your organization.
  • Ingest information about all forms of contribution from ORCID records. Whether or not you’re a member, you can use ORCID’s public API to pull publicly available information from records and use it to help recognize (or evaluate) your researchers’ contributions (see the sketch after this list).
  • Sign the Declaration on Research Assessment. If you haven’t already done so, it’s a worthwhile way to demonstrate your commitment to changing the system.
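For instance, here is a minimal sketch of pulling the membership and service sections of a record from the v3.0 public API, including the source of each item so you can see who added it (the iD is a placeholder, and the JSON field names follow my reading of the v3.0 API and may need adjusting):

```python
import requests

ORCID_ID = "0000-0002-1825-0097"  # placeholder iD for illustration
BASE = "https://pub.orcid.org/v3.0"

def fetch_section(orcid_id: str, section: str) -> dict:
    """Fetch one public activity section of an ORCID record as JSON."""
    resp = requests.get(
        f"{BASE}/{orcid_id}/{section}",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Membership and service affiliations live in their own sections.
for section in ("memberships", "services"):
    data = fetch_section(ORCID_ID, section)
    for group in data.get("affiliation-group", []):
        for wrapper in group.get("summaries", []):
            # v3.0 wraps each item under a key such as "membership-summary"
            # or "service-summary".
            for summary in wrapper.values():
                org = summary["organization"]["name"]
                role = summary.get("role-title") or "(no role listed)"
                source = summary["source"]["source-name"]["value"]
                print(f"{section}: {role} at {org} (added by {source})")
```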

Please let us know of other initiatives that are seeking to expand the options for recognizing and evaluating research and research contributions!


Full disclosure: as noted in the post above, I am a former employee of ORCID and currently work for NISO, which is actively working to make CRediT into a recognized standard.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

14 Thoughts on "Beyond Publication — Increasing Opportunities For Recognizing All Research Contributions"

Thank you, Alice, for this action-oriented post about these important steps toward clarifying recognition for all contributors to research. Excellent news that CRediT is on the ORCID roadmap!

Yes, “researchers contribute to progress beyond just … publication.” And as for letting you know about “other initiatives,” there is contribution to blogs such as The Scholarly Kitchen (when not countermanded by stern editorial oversight). Indeed, circa 1993, with the support of various publishers, I initiated Bionet Journals Note, which was, in some ways, a precursor to SK. Whether researchers really want to be credited for such initiatives is another question. We certainly like to see our work cited when appropriate. That satisfies many of us.

Thank you for the article, Alice.

Reinventing the researcher recognition system is likely the most effective long-term approach to improving research outcomes.

So, while the purpose of research recognition is partly to recognize individual contributions, it also helps research funders and academic institutions assess contributions and process rigor, and thereby optimize future investment.

One of the things we do at Rescognito is to enable retrospective CRediT association with publications. Just add the (Crossref or DataCite) DOI to the https://rescognito.com URL. For example, for this randomly picked preprint DOI (10.1101/2020.06.30.180448), the CRediT recognition page can be accessed here: https://rescognito.com/10.1101/2020.06.30.180448

Once CRediT is collected (either from individuals or from a publisher), it can be visualized in innovative ways: for a manuscript (e.g., https://rescognito.com/v/10.1021/acscatal.0c01257 ) or for an entire publisher (e.g., PLOS: https://rescognito.com/institutionReasonsVisualization.php?iav=03bdvdc06 )
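In code terms, the URL pattern described above is simply (a trivial sketch):

```python
def rescognito_url(doi: str) -> str:
    """Build a Rescognito CRediT recognition URL from a Crossref/DataCite DOI."""
    return f"https://rescognito.com/{doi}"

print(rescognito_url("10.1101/2020.06.30.180448"))
# https://rescognito.com/10.1101/2020.06.30.180448
```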

Richard Wynne – Rescognito

I think these sorts of contributions are vital, and am very happy that we are working out ways to track them. But I still struggle with the concept of “recognition” or “credit” for these actions. Who are we asking to recognize the value of these actions, and what sort of credit are we asking them to grant? Should someone receive a research grant because they peer review a lot of papers by other researchers? Should this qualify them for a research position? I tend to see a lot of these activities more as check boxes a researcher must complete to provide evidence that they are an active, contributing, and valued member of the community, rather than activities that are going to be carefully reviewed and rated. I wrote about this back in 2015 and have yet to see any convincing answers for the questions posed:
https://scholarlykitchen.sspnet.org/2015/06/17/the-problems-with-credit-for-peer-review/

I would reply to that by asking the question the other way around, David – why should someone get a grant or a research position “just” on the basis of publishing a paper?? Especially if it’s not clear what they contributed to that paper? If, for example, peer review is an essential part of the research endeavor, then why isn’t it equally important to use that as another way to evaluate someone’s research contributions? Though I absolutely agree that we can and should do more to distinguish between a cursory three-line review and a thoughtful three-page critique…

In my experience, at least in the sciences, most research-intensive institutions are looking to hire people who can bring in grants. Grants are usually awarded to people who have compelling research plans/ideas, and people who have a productive track record. I think CRediT can be really helpful for the latter, although I think there may be some researcher resistance to having precisely what they’ve done spelled out for potential employers, as there’s a good amount of CV padding that goes on. If it were clear that you only did one small statistical analysis or helped in the writing/editing process and not the conceptualization and performance of the research, that big paper on your CV wouldn’t look as impressive.

But to answer your question, “why should someone get a grant or a research position “just” on the basis of publishing a paper,” the answer is that the report of the research results provides evidence of the researcher’s ability to do the primary thing they’re being hired for, research. Peer review is important, but it’s not the primary role of the researcher. There are jobs where review is the primary role, such as being a journal editor, but that’s not what a university is hiring a professor to do with the majority of their time, nor what a funder is asking the researcher to do with the majority of their efforts in return for the funds received. If you’re a foundation aimed at curing cancer, aren’t you going to want to put your money toward someone who will do experiments that may result in that cure, or are you better off putting money toward people who spend their time evaluating the work of others and not producing original results themselves?

Being asked to perform peer review reflects respect and standing in the community and so can be an important signaling mechanism. It is also essential for the research process, but it is not the primary role of the researcher. Hence my previous recommendations that researchers be required to perform at least some level of community service in this manner, but checking off that requirement is probably the limit of how deep these systems need to go.

Being an eloquent and incisive peer reviewer will make you a favorite of editors, but unless you also accompany it with original research ideas and execution of research plans, you’re not going to get a job or a grant.

Thanks David. I think/hope we are violently agreeing with each other! I am absolutely not saying that researchers should not be evaluated on the basis of their actual research and the publications arising from it, just that their other research contributions should also be taken into account. That’s been difficult historically because there was no easy way to track and share those other contributions, but tools like CRediT and ORCID (and the other suggestions in the comments – thanks all, and please keep them coming!) are increasingly making it easier to do so.

Yes, I think we’re on the same page — the research is what matters the most, but the other parts of the job matter too. They’re worth tracking, and still need to answer questions about who they matter to, and what sort of reward should be offered for performance in these areas (if any).

I’m going to (like Alice) strongly disagree with this assessment, particularly when it comes to those working within the academy. There the concept of the “three-legged stool” — research, teaching, and service — is the standard way of talking about the work that is expected of a faculty member. We have a good (if imperfect) way of understanding how to measure (and subsequently reward, via jobs, raises, promotions, prizes, grants, etc.) research. We have less good (and even more imperfect) ways to assess teaching effectiveness, but those do exist (e.g., student evaluations, peer observations) and also directly affect reward (again, via jobs, raises, promotions, prizes, etc.).

What we do not have at all is a mechanism to reward (in the same way) what is often properly described as the “invisible labor” that is that third leg of the stool — without which the entire scholarly enterprise would collapse. Sure, academics can (and do) list their service activities on their CVs and annual reports. But all lines on a CV are not equal. Being “an eloquent and incisive peer reviewer,” to use your own example, takes time, effort, and thoughtfulness; being a less thorough and less thoughtful reviewer does not take nearly as much time and effort, and consequently does not take away as much from what you are arguing should be the primary focus (i.e., research) — while not reviewing at all (or very rarely) means that that individual can focus all their efforts on research (and, sure, teaching, if they must), hoping someone else will pick up the slack. There are numerous concerns that have been raised about an increasing tendency to do exactly what you suggest: rewarding research almost exclusively, and service not at all. (See, for example, Dean and Forray’s “The Long Goodbye: Can Academic Citizenship Sustain Academic Scholarship?” [https://journals.sagepub.com/doi/10.1177/1056492617726480].) That the current system of more and more (rewarded) publications but fewer and fewer (unrewarded) quality reviews is unsustainable is, I think, a pretty solid argument for rethinking what is rewarded.

Adding to these concerns is the fact that it is women who tend to be the ones doing what Macfarlane and Burg have called the “academic housework” (see https://doi.org/10.1080/1360080X.2019.1589682) — not only reviewing, but mentoring, advising, committee work, and a range of other also unrewarded but necessary activities — work that correlates directly and negatively with promotions and salaries. Why do they engage in this work, then? Macfarlane and Burg observe that many female professors believe (thankfully!) that academic citizenship is an important part of their work — in other words, they truly believe all legs of the stool matter. Rather than then saying women should value that work less, Macfarlane and Burg argue — and I wholeheartedly join them in this view — that the academy “should take a more holistic view of the contribution made by professors, rather than simply looking at how much research funding they have gained.”

This recognition of the need for a more holistic reward system has given rise to a number of initiatives working to do just that, as Alice points out in her post, and is among the goals of the HuMetricsHSS (https://humetricshss.org/) initiative (see additional comment below), as well as many other projects that are working to ensure that we measure and reward what we value, rather than value what we measure.

Hi Rebecca — to be clear, I’m not arguing that this is a good thing. I think the “businessification” of education, the idea that we should run our educational systems like a business has been a disaster and is responsible for an enormous number of problems. But that is often the context in which the researcher is seeking recognition. To quote a colleague at a major medical school, “All we care about is how much money in grants you brought in and where you published your papers (because that leads to grant money).”

One of the key questions I asked in the post I linked to in the first comment above (https://scholarlykitchen.sspnet.org/2015/06/17/the-problems-with-credit-for-peer-review/) is, “who cares?” Not as in being dismissive, but in the idea that if you’re tallying up these numbers, you need someone who cares about them otherwise it’s a waste of time. As I noted in other comments, yes, it’s good that we can start to quantify some of this stuff, but without the fundamental sorts of change you describe to create a more holistic view of what a researcher/professor/instructor should be doing, it’s not going to matter much.

One way that publishers add value is by making “assertions” about research activity. For example: “this research is novel”, “this publication was peer reviewed”, “this research was funded by y”, “this research is based on these data”, “these findings are statistically sound”, “this research was undertaken rigorously”, etc.

The hope is that by making such “assertions” more transparent, open, attributable and granular, the readers of the future (human and machine) will be able to reach better conclusions about where society invests its limited research resources.
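A hypothetical sketch of what such a granular, attributable assertion might look like as a data structure (illustrative only, not Rescognito’s actual data model; the identifiers below are made up):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Assertion:
    """One granular, attributable assertion about a research object."""
    subject_doi: str   # the research object the assertion is about
    claim: str         # e.g., "this publication was peer reviewed"
    asserted_by: str   # who stands behind the claim (e.g., an ORCID iD or ROR ID)
    asserted_on: date  # when the assertion was made

example = Assertion(
    subject_doi="10.1234/example.5678",       # hypothetical DOI
    claim="this publication was peer reviewed",
    asserted_by="https://ror.org/012345678",  # hypothetical publisher ROR ID
    asserted_on=date(2020, 7, 1),
)
print(example)
```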

Richard Wynne – Rescognito

Thanks Richard, I think that’s a really interesting way of looking at these sorts of metrics/recognition models. Perhaps we shouldn’t think of them so much as benefiting the researcher (“credit” for peer review) but instead benefiting the reader (figuring out what’s important or valid).

Thanks so much for this excellent post, Alice.

As you already noted on Twitter this morning — and as we discussed yesterday — the HuMetricsHSS (Humane Metrics in Humanities and Social Science) initiative is among the groups working directly on shifting the conversation from valuing what we can easily measure to instead measuring (and subsequently rewarding) what we can value. I am a co-PI on that project. (Your very own chef Karin Wulf interviewed several of us early on in the initiative’s life and her thoughts can be found here: https://scholarlykitchen.sspnet.org/2017/11/02/metrics-human-made-but-humane/.) HuMetricsHSS is working on creating and supporting values-enacted frameworks for understanding and evaluating all aspects of the scholarly life well-lived and for promoting the nurturing of these values in scholarly practice.

Among our past efforts are workshops focused on two research outputs that are often not seen in that light: syllabi (https://humetricshss.org/blog/examining-the-syllabus-as-scholarly-object-what-can-we-learn-about-values-from-this-teaching-tool/) and annotations (https://humetricshss.org/blog/third-workshop-annotations/). I’m really delighted to have learned from this post that ORCID now treats annotations as a recognized (and hence rewardable) output. Perhaps they might also add syllabi, which we on the HuMetricsHSS team view both as scholarly works in and of themselves (e.g., What can the selection of texts and assignments tell us about the state of a discipline?) and as indicators of the impact of other scholarly works (e.g., If a work is included in a syllabus, what does that mean for the author of the work? Is it worth adding to a tenure and promotion narrative to reflect one’s influence upon other instructors or the discipline as a whole?).

Our current efforts are focused on designing workshops (https://humetricshss.org/your-work/workshop-kit/) that can be run by departments, academic colleges, teams, or any other group. In this time of COVID, we are assuming these workshops will be conducted primarily online for the time being. We are also building a web application to enable scholars, research administrators, and others to assess how they are already embedding values in their scholarly practices and how they can do so even more. Stay tuned for additional information about that part of our work as we develop the app over the coming months.

There are numerous other similar initiatives. You’ve already mentioned DORA. Let me highlight a few more — with the caveat that there are many, many excellent efforts happening in this space!

Imagining America is a consortium of “scholars, artists, designers, humanists, and organizers” whose aim is to “strengthen and promote public scholarship, cultural organizing, and campus change.” One of their efforts is their Assessing the Practices of Public Scholarship collective, which seeks to understand assessment as a “values-engaged” transformative process rather than as a bureaucratic management tool. (See https://imaginingamerica.org/what-we-do/collaborative-research/assessing-the-practices-of-publicly-engaged-scholars/.)

The research coming out of the Scholarly Communications Lab (https://www.scholcommlab.ca/), led by Juan Pablo Alperin and Stefanie Haustein, is invaluable in providing insights into how metrics and rewards operate within the current system and how that system might be shifted to better measure and better reward the work being done by scholars.

The International Network of Research Management Societies (INORMS) Research Evaluation Working Group, led by Lizzie Gadd, has been developing an approach to responsible research evaluation they call SCOPE: START with what you value, consider CONTEXT and weigh OPTIONS for measuring, PROBE deeply what you measure and why, and constantly EVALUATE your evaluation. You can learn more about that effort here: https://thebibliomagician.wordpress.com/2019/12/11/introducing-scope-aprocess-for-evaluating-responsibly/.

Those are just a few examples, but you get the idea. Hopefully others will respond with their own efforts as well; I’d love to hear from them too!
