Last week, I spent a couple of days, with about 80 others, Transforming Research. We may not have completely finished that particular job, but I’ll tell you what — the organizers did a good job of transforming research-oriented conferences. To give just a couple of examples:

  • For each session, the amount of time given over to audience questions pretty much equaled the amount of time given to speakers (for instance, two speakers of about 25 minutes each, then 40-odd minutes of Q&A). If, like me, you’ve spent your life in conference sessions where there’s 5 minutes for questions and no-one seems to have any, you might think that would make for a mortifying, tumbleweedy eternity of the chair trying to find things to ask the speakers about. Actually, it worked brilliantly — when you know there’s masses of time, you don’t feel so shy about “wasting” time asking your question. Consequently there was really lively, wide-ranging discussion at the end of every talk, and I took away a lesson from that.
  • The sponsors each got their 10 minutes, but had to adhere to a theme: how does your service support the advancement of precision medicine? This wasn’t easy – you can see my own attempt here, and I wasn’t the only one who didn’t really know what precision medicine was prior to being asked to talk about it – but it made for a fresher set of demos and, again, is a smart idea that I’ll be suggesting for other conferences in future.
  • While we’re covering the “new”, I’ll share with you the neologisms (clever or detestable, depending on your tolerance for linguistic evolution) that I picked up:
    • Grimpact: the negative impacts of research, which often go either unrecognized or unacknowledged. Think self-driving cars putting people out of work, said Robert Frodeman (Professor of Philosophy, University of North Texas).
    • Anecdata: thanks Richard Naples for alerting me to this one, which Oxford Dictionaries define as “information or evidence that is based on personal experience or observation rather than systematic research or analysis”. And yes, I guess if it already has a definition in the OED, then it’s no longer a neologism — but bringing it into the context of research evaluation was the new step for me; Richard was making the point that “qualitative data can be as ‘off’ as metrics”.
    • Gift citations: oh, all right, by this point you’re probably all rolling your eyes at my being so behind-the-curve, vocabulary-wise, but this was another new one on me (thanks Mike Taylor, tweeting as @TransformingRes) — meaning those kinds of citations that you put in because, well, you know, everyone in your field cites that paper, and you’ve definitely read it too, even if, ahem, on reflection, no, it didn’t specifically contribute to this particular paper, but, well, wouldn’t it be rude not to? (Or perhaps, wouldn’t it affect my career development if I didn’t?)

The intent behind citations came up several times during the conference, and has been an area of study for Chaomei Chen, Professor of Information Science at Drexel University, but his talk on this occasion focused on uncertainty. He and his team have undertaken fascinating analysis looking at:

  • the range of ways in which we express uncertainty in reporting research (“postulate that…”, “suggest…”, “may be…”, “X is unknown”, “unclear”, “debatable”, “inconsistent”, etc.; there’s a toy sketch of counting such terms after this list)
  • the variations between fields (with chemistry at one end of the spectrum and psychology at the other — I’ll leave you to guess which way round, or to have fun exploring slide 39), and
  • the tendency for the uncertainty to disappear not necessarily because concrete evidence has settled things, but simply because the “hedging” terms drop away over time as a work is cited and re-cited.
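
For a flavor of what this kind of text mining involves, here is a toy sketch in Python. To be clear, this is my own crude illustration, not Chen’s actual method: it simply counts the hedging terms listed above, and the sample sentence and the “hedges per 100 words” score are invented for demonstration.

    # Toy illustration only -- not Chen's method. Counts occurrences of a
    # hand-picked list of hedging terms and reports a crude rate per 100 words.
    import re
    from collections import Counter

    HEDGING_TERMS = [
        "postulate that", "suggest", "may be",
        "is unknown", "unclear", "debatable", "inconsistent",
    ]

    def hedging_counts(text):
        """Case-insensitive counts of each hedging term in the text.
        (Naive: "suggest" also matches "suggests", "suggestion", etc.)"""
        lowered = text.lower()
        return Counter({term: len(re.findall(re.escape(term), lowered))
                        for term in HEDGING_TERMS})

    def hedges_per_100_words(text):
        """A crude 'uncertainty' score: hedging terms per 100 words."""
        words = len(text.split())
        return 0.0 if words == 0 else 100 * sum(hedging_counts(text).values()) / words

    sample = ("We postulate that X regulates Y, although the mechanism "
              "is unknown and earlier findings are inconsistent.")
    print(hedging_counts(sample))                       # 3 hedges found
    print(f"{hedges_per_100_words(sample):.1f} hedging terms per 100 words")

Even a counter this crude makes the field-to-field comparison idea concrete; Chen’s real analysis is of course far more sophisticated, not least in tracking how such terms fade as a work is cited and re-cited.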

“This is the beginning of a new research area”, said Chen, emphasizing the implications for research evaluation (is a work more valuable if it is more certain — or if it embraces uncertainty by tackling a new issue? And so on).

One thing Chen didn’t touch on specifically in the talk, but that will undoubtedly be a feature of future research in this area, is whether gender differences are evident in the use of “uncertain” language. He did mention bias (“Most importantly, the uncertainty-centric perspective reminds us [of] the missing information and potential biases we need to deal with”), and this topic surfaced on several other occasions during the conference.

  • George Santangelo, Director, Office of Portfolio Analysis at the National Institutes of Health, flashed up a helpful slide covering confirmation bias, content-based bias, affiliation bias, prestige bias, gender bias, and racial / ethnic bias. His point was that “expert opinion can be imperfect”.
  • Steve Fuller, Professor of Sociology at the University of Warwick, talked about path dependency and anchoring bias (see “gift citations”, above).

Both were timely reminders that — as Bob Frodeman went on to say — we need to “get beyond the idea that science is objective” and that “metrics are not just numbers reflecting reality — they are a system of governance”. I was reminded of Sara Rouhi’s recent post here in the Kitchen, “when politics and metrics collide”, where she articulated this well: “the presumption of objectivity does the greatest disservice to those most affected by bias”.

The final theme that caught my attention was scarcity:

  • “Scarcity drives the demand for inadequate metrics” (Altmetric’s Stacy Konkiel tweeting Bob Frodeman) — I think at that moment the focus was on the scarcity of funding leading to the need for such hyper-evaluation of research and its impact, which in turn has led to the crude application of metrics (or the application of crude metrics). But the comment might equally well refer to the scarcity of skills in research evaluation, and the scarcity of time and people to do it. “How,” tweeted Mike Taylor, “would feelings about metrics change if there was more abundance and less of a scarcity ‘just play the game’ mentality?”
  • Stacy also reported that both scarcity and uncertainty had been prevalent themes in a recent HuMetricsHSS workshop (the HuMetricsHSS project, “rethinking humane indicators of excellence in the humanities and social sciences”, is worth checking out).
  • And finally: another message from Bob Frodeman which touches on scarcity but makes the point that the issue is more what we choose to focus our scarce resources on: it’s common to hear academics concerned that “so much time reporting impact means that I don’t have time to do my research”, but isn’t it right, argued Frodeman, that the focus should be on the impact, not on research for its own sake?

Props to the organizers for creating something that was as much a conversation as a conference. Looking forward to seeing what happens next.

Apologies to anyone whose talks, tweets, etc. I’ve misrepresented. Please do comment below if corrections are required. It was a pretty mind-blowing few days.

Charlie Rapple

Charlie Rapple is co-founder of Kudos, which showcases research to accelerate and broaden its reach and impact. She is also Vice Chair of UKSG and serves on the Editorial Board of UKSG Insights. @charlierapple.bsky.social, x.com/charlierapple and linkedin.com/in/charlierapple. In past lives, Charlie has been an electronic publisher at CatchWord, a marketer at Ingenta, a scholarly comms consultant at TBI Communications, and associate editor of Learned Publishing.

Discussion

12 Thoughts on "Transforming Research, in the Face of Uncertainty, Scarcity, and Bias"

Charlie, I love this post for many reasons. First, I always look forward to your round-up of meetings that I would never really be able to attend. You are sharing the knowledge that a relatively limited number of people had access to and that’s very much appreciated. Being able to weave in links to slides is very helpful.

Second, it sounds like the format for this meeting was different and successful. This is very helpful for those of us who sit on lots of program committees. The longer Q&A is key!

Lastly, the three themes you present – uncertainty, scarcity, and bias – are all important topics affecting the research community that scholarly publishers need to really think about.

Thanks for the interesting post. “Grimpact” is a useful term, and we’ll have many opportunities to use it in the years ahead, although hopefully we can come up with something less jokey-sounding before it becomes too widespread.

If you’ll forgive a minor nitpick… Oxford Dictionaries (www.oxforddictionaries.com) is NOT the OED (www.oed.com). Oxford Dictionaries focuses on “current language and practical usage”, in contrast to the historical emphasis of the OED. As a result, it has a much lower barrier to entry when it comes to neologisms. This has become quite a bugbear, with the steady stream of zeitgeisty posts about how such and such slang is now in the OED, when it’s just not so.

Ahh! I do forgive and thank you for that, Matthew! I didn’t know that. Now I feel much more cutting edge in my neologism knowledge!

The scarcity theme is definitely interesting, but it may just be a feature of the system rather than a problem that can be tackled: unless funding is effectively infinite, the community can be expected to fill up with researchers exploiting those funds. The number of researchers only stabilises once the field becomes so crowded that a new entrant has only a small chance of obtaining funds. So (unless one can keep the pool of researchers small by some other means) there will always be a demand for metrics to sort through a crowded field.

Yes, agreed. I think there’s a fun / interesting thought game to play in terms of “if scarcity weren’t an issue, how might things be different?” – as it might help us rethink metrics or other aspects of research management – but you’re right that it’s ultimately not a scenario that’s likely to happen!

Thanks very much for the summary, it’s extremely interesting! Grimpact is my new favorite word.

Maybe I’m just missing something obvious, but could you explain slide 39 of Chen’s presentation a bit more? I’m not clear on what’s being measured. Use of uncertainty terms?

Charlie, thanks very much for the great post. In addition to the uncertainties hinted at by the ways scientists soften their claims, we are particularly fascinated by uncertainties due to inconsistencies, controversies, and contradictions in science, i.e. all sorts of “troubles”. They are special because they are often the source of an unstable or unsettled epistemic state, and they could become a tipping point in how we update our beliefs and perspectives tomorrow. We are digging deeper into this topic to see how much we can learn about what science is from this angle. It’s great to have had the chance to share our work at this event.

It was really an amazing two days & on behalf of the organizing committee, let me express our gratitude to Charlie for coming and capturing such an excellent summary.

I was particularly pleased with how the extended discussion worked. Other, smaller events, like FORCE (next week) and the Altmetrics Conference (just a few weeks back), have had similarly extended discussion periods in previous years, and I think they work particularly well in niche areas where the people in the audience have as much expertise as the people in front of the audience.

A few things that stood out for me were:

The first session where Dick Klavans painted a picture of science and Kiarri Kershaw talked about what it was like “in the trenches”. Dick’s work on identifying where it’s raining grant funding just might be what’s needed to flip the mindset of a researcher from scarcity to abundance and ease the worries about adopting more open practices, sharing data, etc. On the other hand, Kiarri’s comment that the output of the whole research assessment process appears to her to be no different from a random number generator was shocking & revealing.

Chaomei’s uncertainty-centric networks gave me a whole new way of thinking about the information content of data, too. If this sounds interesting to you, he has a book on the subject, “The Fitness of Information”.

It was interesting to see how little the word “impact” was actually used & how self-consciously. It’s almost like people are realizing that impact is subjective!

Santangelo’s set of tools (iTrans, in particular) was neat, though I got the impression their work is fairly insular, based on the discussion with the bibliometricians in the room, who seemed to feel that he was not familiar with their work.

The back-and-forth between Frodeman & friends and the funders on the life sciences side was eye-opening. They started off totally talking past one another on impact (changing minds vs. curing patients) but they got closer. I say Frodeman & friends because it was apparent that all three (including Holbrook & Fuller) knew each other well & played off each other, which led to energetic discussion. The energy was strong enough (three opinionated guys – the only session like that) that people commented about it afterwards. Apparently philosophy conferences are all like that!

Kristi Holmes, who we were lucky to have on the organizing committee, put together a cracking set of talks the second morning which, although breaking from the extended discussion pattern a bit, presented a great series of examples & use cases from research institutions. I regret that I missed most of this due to pressing matters, but thankfully Elsevier arranged to stream & record the conference. Not only did this enable participation from people who couldn’t be physically present (and give me a lot more respect for the conferences that do this well), but it will allow the event itself to have broader impact 😉

Just a brief note, Charlie, but I was running the @TransformingRes account for those two quotes you mentioned – gift citations & abundance vs scarcity affecting the “play the game” mentality. However, it’s entirely possible that I originally got them from him!

Oh! Sorry, I thought it was Mike on the Twitter account throughout! Thanks for correcting that! And thanks for adding to the summary.
