Last week, I spent a couple of days, with about 80 others, Transforming Research. We may not have completely finished that particular job, but I’ll tell you what — the organizers did a good job of transforming research-oriented conferences. To give just a couple of examples:
- For each session, the amount of time given over to audience questions pretty much equaled the amount of time given to speakers (for instance, 2 x speakers of about 25 minutes each, then 40-odd minutes of Q&A). If, like me, you’ve spent your life in conference sessions where there’s 5 minutes for questions and no-one seems to have any, you might think that would make for a mortifying, tumbleweedy eternity of the chair trying to find things to ask the speakers about. Actually, it worked brilliantly — when you know there’s masses of time, you don’t feel so shy about “wasting” time asking your question. Consequently there was really lively, wide-ranging discussion at the end of every talk, and I took away a lesson from that.
- The sponsors each got their 10 minutes, but had to adhere to a theme: how does your service support the advancement of precision medicine? This wasn’t easy – you can see my own attempt here, and I wasn’t the only one who didn’t really know what precision medicine was prior to being asked to talk about it – but it made for a fresher set of demos and, again, is a smart idea that I’ll be suggesting for other conferences in future.
- While we’re covering the “new”, I’ll share with you the neologisms (clever or detestable, depending on your tolerance for linguistic evolution) that I picked up:
- Grimpact: the negative impacts of research, which often go either unrecognized or unacknowledged. Think self-driving cars putting people out of work, said Robert Frodeman (Professor of Philosophy, University of North Texas).
- Anecdata: thanks Richard Naples for alerting me to this one, which Oxford Dictionaries define as “information or evidence that is based on personal experience or observation rather than systematic research or analysis”. And yes, I guess if it already has a definition in the OED, then it’s no longer a neologism — but bringing it into the context of research evaluation was the new step for me; Richard was making the point that “qualitative data can be as ‘off’ as metrics”.
- Gift citations: oh, all right, by this point you’re probably all rolling your eyes at my being so behind-the-curve, vocabulary-wise, but this was another new one on me (thanks Mike Taylor, tweeting as @TransformingRes) — meaning those kinds of citations that you put in because, well, you know, everyone in your field cites that paper, and you’ve definitely read it too, even if, ahem, on reflection, no, it didn’t specifically contribute to this specific paper, but, well, wouldn’t it be rude not to? (or perhaps, wouldn’t it affect my career development if I didn’t?)
The intent behind citations came up several times during the conference, and has been an area of study for Chaomei Chen, Professor of Information Science at Drexel University, but his talk on this occasion focused on uncertainty. He and his team have undertaken fascinating analysis looking at:
- the range of ways in which we express uncertainty in reporting research (“postulate that…”, “suggest…”, “may be…”, “X is unknown”, “unclear”, “debatable”, “inconsistent”, etc.)
- the variations between fields (with chemistry at one end of the spectrum and psychology at the other — I’ll leave you to guess which way round, or to have fun exploring slide 39), and
- the tendency for the uncertainty to disappear not necessarily because concrete evidence has settled things, but simply because the “hedging” terms disappear over time as a work is cited and re-cited.
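The kind of hedging-term analysis Chen describes could be sketched very simply. The snippet below is a minimal, hypothetical illustration of flagging “uncertainty” cues in sentences — the cue list and the naive counting are my own assumptions for demonstration, not Chen’s actual lexicon or method:

```python
# Hypothetical sketch: flag hedging ("uncertainty") cues in sentences.
# The cue list and the simple substring counting are illustrative
# assumptions, not the method used by Chen's team.

HEDGE_CUES = [
    "postulate that", "suggest", "may be", "is unknown",
    "unclear", "debatable", "inconsistent",
]

def hedge_count(sentence: str) -> int:
    """Count how many hedging cues appear in a sentence (case-insensitive)."""
    text = sentence.lower()
    return sum(text.count(cue) for cue in HEDGE_CUES)

def is_hedged(sentence: str) -> bool:
    """Treat a sentence as 'hedged' if it contains at least one cue."""
    return hedge_count(sentence) > 0
```

Running something like this over successive papers that cite (and re-cite) a claim would let you watch the hedging terms thin out over time, which is the drift Chen’s team observed.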
“This is the beginning of a new research area”, said Chen, emphasizing the implications for research evaluation (is a work more valuable if it is more certain — or if it embraces uncertainty by tackling a new issue? etc).
One thing Chen didn’t touch on specifically in the talk, but that will undoubtedly be a feature of future research in this area, is whether gender differences are evident in the use of “uncertain” language. He did mention bias (“Most importantly, the uncertainty-centric perspective reminds us [of] the missing information and potential biases we need to deal with”), and this topic surfaced on several other occasions during the conference.
- George Santangelo, Director, Office of Portfolio Analysis at the National Institutes of Health, flashed up a helpful slide covering confirmation bias, content-based bias, affiliation bias, prestige bias, gender bias, and racial / ethnic bias. His point was that “expert opinion can be imperfect”.
- Steve Fuller, Professor of Sociology at the University of Warwick, talked about path dependency and anchoring bias (see “gift citations”, above).
Both were timely reminders that — as Bob Frodeman went on to say — we need to “get beyond the idea that science is objective” and “metrics are not just numbers reflecting reality — they are a system of governance”. I was reminded of Sara Rouhi’s recent post here in the Kitchen, “when politics and metrics collide”, where she articulated this well: “the presumption of objectivity does the greatest disservice to those most affected by bias”.
The final theme that caught my attention was scarcity:
- “Scarcity drives the demand for inadequate metrics” (Altmetric’s Stacy Konkiel tweeting Bob Frodeman) — I think at that moment the focus was on the scarcity of funding leading to the need for such hyper-evaluation of research and its impact, which in turn has led to the crude application of metrics. (Or the application of crude metrics). But the comment might equally well refer to the scarcity of skills in research evaluation, and the scarcity of time / people to do it. “How,” tweeted Mike Taylor, “would feelings about metrics change if there was more abundance and less of a scarcity ‘just play the game’ mentality?”
- Stacy also reported that both scarcity and uncertainty had been prevalent themes in a recent HuMetricsHSS workshop (the HuMetricsHSS project, “rethinking humane indicators of excellence in the humanities and social sciences”, is worth checking out).
- And finally: another message from Bob Frodeman which touches on scarcity but makes the point that the issue is more what we choose to focus our scarce resources on: it’s common to hear academics concerned that “so much time reporting impact means that I don’t have time to do my research”, but isn’t it right, argued Frodeman, that the focus should be on the impact, not on research for its own sake?
Props to the organizers for creating something that was as much a conversation as a conference. Looking forward to seeing what happens next.
Apologies to anyone whose talks, tweets etc I’ve misrepresented. Please do comment below if corrections are required. It was a pretty mind-blowing few days.