Retracted research — published work that is withdrawn, removed, or otherwise invalidated from the scientific and scholarly record — is relatively rare. However, when it is inadvertently propagated within the digital scholarly record through citations, its impact can be significant, and not in a good way. Look no further than Andrew Wakefield’s notorious 1998 article, which falsely claimed a link between the MMR vaccine and autism. Although subsequently retracted, it continues to be extensively cited and quoted. A number of organizations, including Retraction Watch, have worked to highlight and address this problem, and they’ve recently been joined by the Reducing the Inadvertent Spread of Retracted Science (RISRS) project, led by Jodi Schneider, Assistant Professor of Information Sciences at the University of Illinois Urbana-Champaign. In this interview, she tells us more about this work and how she hopes it will help. (Full disclosure: I participated in the RISRS project, including being interviewed and attending the workshops.)


Please can you tell us a bit about yourself — what is your current role and how did you get there?

I’m a faculty member, teaching future librarians and future data scientists. It’s been a winding path: before getting into this field I worked in insurance math and as a bookstore gift buyer, science library staffer, web librarian, and community manager for a wiki. Eventually I found my way into informatics, which led me to my current role.

What prompted your interest in retractions?

I have a long-standing interest in scholarly communication, especially how people make valid scientific arguments. I got interested in retractions as a way to think about what happens when we CAN’T rely on the results of a research paper. In particular, how does that impact later work that builds on retracted science?

The Sloan Foundation clearly agreed, as they funded RISRS. Can you tell us more about that project — who was involved, the main goals, etc?

The goal of the RISRS project is to figure out how to reduce the inadvertent citation and reuse of retracted science. Citation and use of retracted papers is common: 30 years of research, going back to 1989, has documented this. Inadvertent use of retracted papers is problematic, particularly in clinical medicine. The main approach in RISRS has been an environmental scan and stakeholder consultation, supplemented with a citation analysis.

We set out to investigate four research questions:

  1. What is the actual harm associated with retracted research?
  2. What are the intervention points for stopping the spread of retracted research? Which gatekeepers can intervene and/or disseminate retraction status?
  3. What are the classes of retracted papers? (What classes of retracted papers can be considered citable, and in what context?)
  4. What are the impediments to open access dissemination of retraction statuses and retraction notices?

Overall, about 70 people were involved: we had a three-part workshop with attendees from across scholarly communication, as well as interviewees. I was lucky to have a diverse project team of undergrads, Master’s students, and PhD students, plus a part-time project manager supporting the project.

What lessons did you learn from the RISRS project? Is there anything that worked especially well? Anything you would have done differently, with hindsight?

I have a great advisory board that provided a lot of support in recruiting participants, shaping discussions, and leading ongoing dissemination. Also, recruiting people active in professional societies was particularly helpful, especially for wide dissemination of our outcomes.

Interviewing participants was a critical part of the project. It built momentum from the start and completely shifted my understanding of retraction. Snowball recruitment of participants was valuable — we were able to recruit people from roles we would not have considered if the entire list had been pre-prepared.

One limitation is that we engaged English-speaking participants, largely from North America and Western Europe. Small publishers were also not well-represented.

Few participants wanted to share position papers — it may be that we asked too late, or that that’s too far from some participants’ work. The ones we got are valuable. I keep referring back to one position paper that provides screenshots showing how one journal indicates retraction differently on three different retracted papers. Even though the work is now formally published, these images didn’t make it into the full publication.

Moving the workshop online was time-consuming. Originally we envisioned a 1.5-day in-person workshop, but the pandemic hit about six weeks into the project, so we moved to three half-day workshops. Having time between the workshop sessions enabled us to update the conversation plan: we found that two weeks was better for this than one. We learned a lot about the technology along the way: assigning people to preset breakout rooms was easier than asking them to choose rooms, and Zoom was a barrier at that point for some participants, especially in government. Technology is still a challenge for conversation-based workshops. I’d like a tool that allows a seamless move from a Zoom-style room to a spatial chat tool that facilitates mingling. The large-scale conference software that I’ve tried at recent events basically wraps around Zoom but doesn’t provide much in the way of ad hoc conversation spaces.

The pandemic’s silver lining for me has been the attention it has brought to retractions, publication quality, and their impact on policy and subsequent research. Of the many mainstream pieces this year, my favorite is the data visualization in The Economist, “’Tis but a scratch: Zombie research haunts academic literature long after its supposed demise”.

Something that this project emphasized is that sharing in-progress work is really important. Before we had anything down on paper, we started planning our first presentation, at NISO Plus 2021, which had a huge impact on the uptake of the work. We shared draft reports internally a few months before we posted them publicly. In future projects I’d like to adopt a more rigorous schedule for drafts, to get even more feedback from a wider community along the way.

The RISRS final recommendations were published recently — can you summarize them for us?

The project came up with four main recommendations:

  1. Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions.
  2. Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders.
  3. Develop best practices for coordinating the retraction process to enable timely, fair, unbiased outcomes.
  4. Educate stakeholders about publication correction processes including retraction and about pre- and post-publication stewardship of the scholarly record.

Based on your experience of RISRS, what do you see as the main challenges in terms of retracted research?

A very small number of publications are retracted overall, so retraction can seem like an “edge case” for information systems and workflows. There’s a lot of inconsistency, and it can be hard to tell that an item is retracted, whether from the publisher’s website or from databases and search engines.

In 2012, Phil Davis wrote that there should be three intervention points: before reading, before writing, and before publishing. Interventions for readers and for authors have been slow to come to fruition. Until last year, CrossMark charged publishers to participate. Zotero has been at the leading edge in alerting authors and readers to retracted papers (before reading AND before writing!), and this feature has since been added to other software, for instance, Papers. Tools to help journals check bibliographies against known lists of retracted papers are getting wider use. Most commonly, these use PubMed, which is the best source in biomedicine, and possibly Crossref data. Fewer tools are licensing data from Retraction Watch, which I consider the best database for retractions, with over 30,000 items (about three times the number in PubMed).
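To make the data side of this concrete, here is a minimal sketch of the kind of automated lookup such tools run. It is not any particular vendor’s implementation; it assumes only the public Crossref REST API and its documented `updates` filter, which returns records (such as retraction notices) that editorially update a given DOI. Coverage depends on publishers actually depositing that metadata, so an empty result does not prove a paper stands.

```python
# Minimal sketch: look up retraction-type updates for a DOI via the
# public Crossref REST API. Real tools combine several sources
# (PubMed, Retraction Watch); Crossref alone is incomplete.
import requests

def crossref_retraction_notices(doi):
    """Return (update_type, notice_doi) pairs for records that register
    an editorial update (e.g., a retraction) to the given DOI."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=30,
    )
    resp.raise_for_status()
    notices = []
    for item in resp.json()["message"]["items"]:
        # A retraction notice carries "update-to" metadata pointing
        # back at the DOI it retracts.
        for update in item.get("update-to", []):
            if update.get("DOI", "").lower() == doi.lower():
                notices.append((update.get("type"), item.get("DOI")))
    return notices

# Example with the retracted Surgisphere paper in The Lancet; a
# ("retraction", ...) pair should appear if the publisher deposited
# Crossmark update metadata.
print(crossref_retraction_notices("10.1016/S0140-6736(20)31180-6"))
```

In biomedicine, the same kind of check is often run against PubMed instead, which marks retracted items with the “Retracted Publication” publication type.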

One direct result of the recommendations is the formation of a new NISO Working Group to work on a Communication of Retractions, Removals, and Expressions of Concern (CORREC) Recommended Practice. Can you tell us a bit more about that?

CORREC will focus on how to communicate retractions. It will not address the questions of what a retraction is or why an object is retracted. Rather it focuses on what happens once something has been retracted: what metadata should be updated, how the retraction should be displayed, and how that information should be communicated.
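Purely as an illustration of what consistent, machine-readable retraction metadata might mean in practice (CORREC will define the actual fields; everything below is hypothetical), a standardized record could carry something like:

```python
# Hypothetical sketch only: these field names are invented for
# illustration and are not CORREC output or any existing standard.
retraction_record = {
    "retracted_doi": "10.1234/example.5678",  # the object being retracted
    "notice_doi": "10.1234/example.9012",     # the retraction notice itself
    "update_type": "retraction",              # vs. correction or expression of concern
    "reasons": ["data fabrication"],          # drawn from an agreed taxonomy (Recommendation 2)
    "effective_date": "2021-11-01",           # when the retraction was issued
    "display": "watermarked-pdf-and-banner",  # how the publisher marks the item
}
```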

What other outcomes do you expect (or hope!) will come out of this work?

Ultimately, if CORREC is successful, publishers, preprint servers, and data repositories will have guidance on metadata and display standards. Search engines and databases will be able to ingest consistent metadata. All of these information providers can better serve machine and human consumers with consistent, timely information, in standard formats. Consumers will have an easier time understanding what is retracted — which should reduce inadvertent use of retracted papers in follow-on work by researchers, journalists, activists, librarians, and citizen scientists of all sorts.

As a researcher, of course, I also hope and suspect that new questions will come out of this process!

What’s next for you and the RISRS project?

Disseminating the recommendations – earlier this month I spoke at COPE and ISMTE online meetings and I’m happy to speak to other industry groups. I’m revising two papers about the recommendations: a preprint under review summarizing the whole project and a more narrowly scoped article aimed at publishing professionals. And my team is synthesizing some of the literature that we collected for the Empirical Lit bibliography — currently we’re focused on a subset of 134 papers most closely related to the spread of retracted science. And of course, participating in CORREC!

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

4 Thoughts on "Actions on Retractions: An Interview with Jodi Schneider"

Nice article. I was glad to learn about RISRS. The article stated that one of the four aims of RISRS was to investigate the actual harm associated with retracted research. It noted that retractions are relatively rare, and that seemed to be about the only conclusion around the impact of harm. I also read the white paper you linked to, which was not more enlightening. In my opinion, the frequency of retractions is a less important harm than the disproportionate influence of a few Big Impact papers. Examples of Big Impact retracted papers are easy to find, e.g., Andrew Wakefield’s papers, and the shoddy science during COVID that got huge attention in the media. I think about retractions like weather prediction. It’s less important how often meteorologists get the daily temperature and rain predictions wrong. It’s more important when they get the rare, big things wrong, like where hurricanes will land and how strong they will be. Though rare, those events cause huge harm. Didn’t RISRS discover anything else about the harm of retracted research?

I am so glad that preprints are included in this initiative!

I would also like to see searching referenced articles for retractions incorporated into literature search strategies for serious research.

I completely agree! RedacTek advertises that they help “validate your sources to the third generation” – meaning that for a given article you can look at whether they are citing any retracted papers (and whether THOSE articles cited any retracted papers). They seem to be focusing first on biomedicine – they’ve exhibited at MLA.
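To sketch the idea in code (a conceptual sketch only, not RedacTek’s actual implementation): given a way to fetch an article’s references and a retraction lookup, a “third generation” check is just a bounded walk of the citation graph. Both helpers here are assumptions; `get_references` might wrap Crossref or PubMed reference data, and `is_retracted` a retraction database such as Retraction Watch.

```python
# Conceptual sketch only -- not RedacTek's implementation.
def flag_retracted_sources(doi, get_references, is_retracted, depth=3):
    """Walk the citation graph up to `depth` generations out from `doi`,
    returning (citing_doi, retracted_doi, generation) triples."""
    flagged = []
    seen = {doi}
    frontier = [(doi, 0)]
    while frontier:
        current, generation = frontier.pop()
        if generation == depth:
            continue  # don't expand beyond the requested depth
        for cited in get_references(current):
            if is_retracted(cited):
                flagged.append((current, cited, generation + 1))
            if cited not in seen:  # avoid re-walking shared references
                seen.add(cited)
                frontier.append((cited, generation + 1))
    return flagged
```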

On the more theoretical side, my PhD student and I have been looking into “knowledge dependence” – WHEN citing a retracted paper actually impacts follow-on work. In some cases, such as review articles, there’s an obvious potential impact. In general, we think that understanding the argumentation structure of the paper, as well as what is said about the cited paper, matters. Our paper on this is called Towards knowledge maintenance in scientific digital libraries with the keystone framework. I’d love to hear what you think!

@Michael

You’re right that there are multiple kinds of harms. And thanks for pointing out that we need to stress this more in reports. That’s helpful feedback and particularly timely as we focus on revising for Research Integrity and Peer Review.

Misinformation to the public is one of the most vivid harms. The Wakefield example is frequently discussed in this context. The best evidence comes from case studies, particularly on media coverage (e.g., [1]). For instance, news media may cover the research and never cover the retraction.

Continued circulation INSIDE the scientific community – without awareness of the retraction – is also a problem. Two COVID-19-related retractions discredited within a month of publication (the Surgisphere case) have been cited more than 1,000 times each, often without awareness of the retraction (about 50% of citations in one investigation [2]). That’s actually WAY better than the typical situation (and I suspect it’s related to the publicity that case got): in work for this project, a PhD student and I found that over 94% of post-retraction citations in biomedicine DID NOT show awareness of the retraction [3].

Other harms include:
(1) Clinical impact/human harms [4]
(2) Economic costs of retraction or misconduct [5]
(3) Impact of retraction on education/clinical training [6]
(4) Impact on downstream literature (e.g. systematic reviews and meta-analyses) [7]
(5) Impact on a particular research field [8]

Sample citations below – and more in my Empirical Lit bibliography under “Impacts” and “Perceptions and discussion of retracted papers; Altmetrics–News”
https://infoqualitylab.org/projects/risrs2020/bibliography/

My next project focuses on how journalists, Wikipedia editors, activists, and librarians serve as “knowledge brokers” for policy-relevant scientific and technological information. I’ll be trying to understand what quality signals knowledge brokers look for as well as to understand why information that scientists have discredited continues to circulate, in three case studies: COVID-19, climate change, and AI and labor. I’d welcome thoughts on that!

[1] Roy F. Rada. 2005. A case study of a retracted systematic review on interactive health communication applications: Impact on media, scientists, and patients. Journal of Medical Internet Research, 7(2), e18. http://doi.org/10.2196/jmir.7.2.e18

[2] Charles Piller. 2021. Disgraced COVID-19 studies are still routinely cited. Science, 371(6527), 331-332. http://doi.org/10.1126/science.371.6527.331

[3] Tzu-Kun Hsiao and Jodi Schneider. 2021. Continued Use of Retracted Papers: Temporal Trends in Citations and (Lack of) Awareness of Retractions Shown in Citation Contexts in Biomedicine. Quantitative Science Studies (online first). http://doi.org/10.1162/qss_a_00155

[4] Grant Steen. 2011. Retractions in the medical literature: How many patients are put at risk by flawed research? Journal of Medical Ethics, 37(11), 688-692. http://doi.org/10.1136/jme.2011.043133

[5] Elizabeth Gammon and Luisa Franzini. 2013. Research misconduct oversight: Defining case costs. Journal of Health Care Finance, 40(2), 75-99. Self-archived version: https://mdsoar.org/handle/11603/4069

[6] Christian J. Wiedermann. 2018. Undisclosed conflicts of interest in German-language textbooks of anesthesiology, critical care, and emergency medicine. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen, 139, 53-58. http://doi.org/10.1016/j.zefq.2018.10.004

[7] Richard Gray, Amal Al-Ghareeb, Jenny Davis, Lisa McKenna, and Stav Amichai Hillel. Inclusion of nursing trials in systematic reviews after they have been retracted: Does it happen and what should we do?

[8] Pierre Azoulay, Jeffrey L. Furman, Joshua L. Krieger, and Fiona Murray. 2015. Retractions. Review of Economics and Statistics, 97(5), 1118-1136. http://doi.org/10.1162/REST_a_00469
