I asked an academic colleague, well past tenure, if he would have preferred to think of his career trajectory in terms of more individualized milestones and horizons, rather than the standard, institutional tenure and promotion process. His answer surprised me. He’s a pretty straightforward fellow, a thoughtful and accomplished scholar, an inspired teacher, and a mainstay of service on our department’s many committees — the trifecta for academic evaluation. I thought he would report that the clarity and longstanding expectations for tenure would be more appealing; after all, this was a process he had mastered. But no. He reported that he would rather have been assessed on a quite different set of criteria, including intellectual milestones such as the acquisition of knowledge and the mastery of methods of dissemination that had been a priority for him from early on.
The basic notion that we can and should apply standardized criteria for intellectual achievement and contribution is central to much of contemporary education. In higher education, such standardization in the form of metrical evaluation is patently biased, as any number of studies have demonstrated. In scholarly communications, citation metrics — often used as an indicator of quality scholarship, including in academic tenure and promotion processes — are among the most vexed subjects. There are technical, epistemological, and philosophical issues to consider, all of them politically freighted. How to weight the importance of citation metrics, the evident bias in metrics of all kinds, and the applicability of these metrics to the humanities versus STEM fields are issues that have prompted numerous studies, reports, and reflections. Recently in the Kitchen, Angela Cochran surveyed citation and other metrics, offering a metrical assessment of these metrical tools on a grain-of-salt scale, from pinch to cup to bathtub to classroom-full. By Angela's calculation, metric tools ought to be taken, on average, with 286 billion grains of salt. Sounds about right to me. An intensive review commissioned by the Higher Education Funding Council for England (HEFCE) — an important exercise given the concerns about over-reliance on metrics in the Research Excellence Framework (REF) — concluded in 2015 with a cautionary note. In another recent Kitchen post, Sara Rouhi pointed to the structures of power inherent in every aspect of evaluating scholarship.
But is there a way to fashion metrics differently? For academic humanists like my colleague, for whom citation metrics do not yet play a major role in their professional profile and evaluations, there is still a discomfort with the way that metrics have begun to leak into all aspects of the intellectual economy. And there is a sense that there ought to be better — more humanistic — ways to assess their labors. An alternative is being workshopped. With a grant from the Andrew W. Mellon Foundation, HumetricsHSS is a kind of meta-workshop in “rethinking humane indicators of excellence in the humanities and social sciences.” A pilot phase is allowing HumetricsHSS to test a set of propositions exploring how, if an evaluative process could be rebuilt on humanities values, individuals and institutions might embrace a very different approach to assessing scholars and scholarship. Christopher Long, one of the HumetricsHSS leads and Dean of the College of Arts and Letters at Michigan State University, observed in a post on the LSE Impact blog yesterday,
“If the metrics we use are to be capable of empowering innovative research along diverse pathways of intellectual development, they must be rooted in values that enrich the scholarly endeavor and practices that enhance the work produced.”
In addition to Long, the HumetricsHSS core team is a group with primary professional affiliations in libraries and scholarly society programs but with long experience in higher education. I spoke with three of the six — Nicky Agate (Head of Digital Initiatives at the Modern Language Association), Rebecca Kennison (Executive Director and Principal of K/N Consultants and a co-founder of the Open Access Network), and Stacy Konkiel (Director of Research and Education for Altmetric) — to ask them more about the premises behind HumetricsHSS, how the project is developing, and what it might ultimately offer for individuals and institutions. The enthusiasm and commitment of the team are clear, and infectious.
Nicky Agate made clear that HumetricsHSS is not a product or a service, but at this point an initiative to bring interdisciplinary HSS groups together around the potential for different standards of evaluation. The pilot phase is “exploratory not programmatic.” The HumetricsHSS workshops are meant to be iterative, allowing evolution within each workshop group and then among the workshops. A key question for HumetricsHSS, given the evidence suggesting that metrics can drive negative behaviors, is whether, as Stacy Konkiel puts it, “certain measures, if well chosen and incentivized… [could] actually drive positive behaviors?” But what, exactly, will be measured?
With the ever-wider array of measurable outputs, one answer might be to use more subtle measures of different kinds of outputs (more like Altmetrics than the JIF, for example). But Rebecca Kennison described how, out of the first HumetricsHSS brainstorming, “We wanted to think about not just what we wanted to measure, but what we wanted to inspire.” The resulting “Humanities Values” graphic, now prominent in HumetricsHSS materials, is a bit misleading in that it represents only an early product of that desire to inspire. But it is illustrative. The team articulated five core values — equity, openness, collegiality, quality, and community — with some specification and elaboration of each (inclusivity, public good, social justice, equitable access, and accessibility, for example, within the general value of equity). Based on these values, presumably, one could design and employ a set of those more subtle and appropriate (hu)metrics.
The initial step, then, has been identifying the values as a foundation on which to build. But in their first workshop, when new voices were brought into the discussion, the HumetricsHSS team found that there was actually little agreement on the core values they had proposed. Held last month at Michigan State University, “The Value of Values” was a three-day event that gathered two dozen humanities and social science scholars and administrators to explore the initial list of five values. Pre-reading included the Palgrave Communications article from January 2017, “‘Excellence R Us’: university research and the fetishisation of excellence,” which argues that the rhetoric and infrastructure around “excellence” narrow intellectual inquiry and can reinforce existing hierarchies of all sorts. Workshop participants were asked to think about values both for individual scholars and for institutions. Adriel Trott was a participant in that workshop, and blogged about the experience. She included some images of the workshop’s sticky-note and poster-paper process of sharing ideas — if not coming to consensus — around values, with an illustrative profusion of pink paper, lines, arrows, and circles suggesting the intensity of the exchanges. After the three days, she wrote, it seemed time to start thinking about how to scale and transfer the idealism encouraged (productively so) in the workshop.
This is part of the longer-term plan. Nicky Agate told me that HumetricsHSS could ultimately offer “a flexible framework that institutions, departments and individuals could use.” Long describes the initiative as looking to expand the types of scholarly contributions that are rewarded in the academy (reviewing and mentoring, for example) in order to present — in words I heard from his teammates, too — “a more textured story” about a scholar’s work. Through this pilot phase, HumetricsHSS has scheduled presentations and shorter workshops within other conferences and meetings, as well as further workshops of its own.
But is there really an end in these means? Some scholarly societies have already embraced a broader range of scholarly activities and are encouraging academic departments to do the same; some universities and academic departments have long been more expansive in the ways they appreciate a variety of scholarly contributions, along with teaching and service. Surely much more is needed in this direction, but I’m not sure that will be the primary value of HumetricsHSS. In the final analysis, HumetricsHSS isn’t necessarily about metrics, or the Humanities, and it’s clearly not only applicable to the academy. The value of the project may be in creating — even forcing — time and space for a more intentional, less reactive approach to professional development and to appreciating a rich diversity of scholarly contributions. If there is unlikely to be unanimity, or even enough consensus, about “core values” to make measurement practical beyond the individual level, the process of debating the values inherent in scholarship seems important in itself for broader scholarly communities.