
Integrity has always been a – perhaps the – critical element in research. It’s essential for everyone reading or using scholarly information to be certain that it is authoritative and trustworthy, all the more so in a digital environment, where it isn’t always easy to distinguish good research from bad research.

Over the years, the scholarly community has developed a number of tools to help with this, with peer review being the most obvious. But in a fast-changing world – where the number of new researchers is expanding rapidly,* the amount of cross-institution and/or international collaboration is at an all-time high,** and the volume of papers published continues to increase year by year*** – how confident can we really be about the quality of the articles we read? Is there more unethical and fraudulent research being published these days, or are we simply better at detecting it? What is the most effective way of tackling misconduct – the carrot or the stick? And, if an article needs to be retracted, what is the best way to do this? These were just some of the issues raised at the recent 3rd World Congress on Research Integrity in Montreal.

First things first – how big a problem is research misconduct? The answer depends on what you are measuring and how you are measuring it. Nicholas Steneck (University of Michigan) said that 1-3% of all research articles are affected by misconduct, and quoted an estimated figure of 80,000 plagiarism cases per year. But those numbers include everything from minor ‘offences’ (mostly either unintentional or misguided) to cases of major and intentional fraud. Another possible measurement was highlighted by David Wright, of the US Office of Research Integrity (ORI), who told the conference that the number of allegations of research misconduct reported to them doubled in 2012, and that the cases themselves are also getting much more complex. Monitoring the number of retractions may also be a helpful indicator – the estimate for 2012 is around 400, about on a par with 2011, according to Retraction Watch (Ivan Oransky of Retraction Watch was another speaker at the Congress).

Opinions also differed on the best approach to preventing misconduct. For example, Michael Farthing (University of Sussex, England) believes that a speed camera approach works best (i.e., if researchers know there’s a good chance they may be audited, they are less likely to offend in the first place). However, Makoto Asashima of the Japan Society for the Promotion of Science, a funding organization, announced that it has recently increased the penalty for misconduct from five years of restricted eligibility for grants and other funding to 10 years – very much a ‘stick’ approach. Beth Fischer of the University of Pittsburgh highlighted that the current tenure system, through its requirement to publish, can effectively reward researchers for bad behavior; instead, she suggested, we must find ways to reward people for good behavior.

Irrespective of whether you choose the carrot or the stick, there are still challenges relating to what the rewards or punishments should be, and who should be responsible for implementing them – not to mention how to identify offenders in the first place. Although tools such as CrossCheck have helped enormously, wrongdoers still slip through the system, sometimes for years – as Oransky noted, the record for longest time from article publication to retraction is a staggering 27 years! All of this is especially complex when dealing with collaborations across multiple institutions and/or countries, because there are currently no national or global agreements about appropriate rewards or deterrents and, other than through meetings such as the Congress, few opportunities for stakeholders to discuss these issues.

There are some very helpful guidelines, including the outputs from these meetings – see the Singapore Statement on Research Integrity (2010) and the forthcoming Montreal Statement from this year – as well as materials produced by the Committee on Publication Ethics (COPE). But there is currently no consistency in how these are applied or enforced. Encouragingly, the Global Research Council (a new organization comprising the heads of 70 funding organizations globally) is currently looking at establishing high-level principles of research integrity which, if adopted, could provide some real momentum. However, as several speakers pointed out, even funders don’t always feel they have the power to enact appropriate sanctions because they are not the wrongdoer’s employer.

There’s another whole set of questions around retractions – when to retract rather than correct; how much information to include about the retraction; what to do about other versions of a paper (especially in an increasingly OA world). As Veronique Kiermer (Nature) pointed out, publishers have to try to balance the needs of authors who have made an honest mistake (and should be encouraged to improve rather than stigmatized) with the need to ensure that the data and other information we publish are accurate. As a result, there are no consistent standards across all publishers, though again the COPE guidelines and other tools have proved invaluable.

So, while there still seem to be more questions than answers, at least those questions are being asked – and being asked by stakeholders from across the global communications community. Perhaps a forthcoming survey being carried out by the University of Tennessee and the CIBER group, as part of a study of authority and trustworthiness in the digital age, will provide a few more answers on what researchers care about. More on that in a future post.

*25% increase between 2002 and 2007 (Tony Mayer, speaking at the 3rd World Congress on Research Integrity)
**Just 26% of articles now have a single author, while 35% come from international collaborations (Nature Publishing Index 2012)
***The STM Report 2012 (page 23)

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

10 Thoughts on "Research Integrity – More Questions Than Answers"

Just curious: aside from proof of outright plagiarism, is there any ground for retraction of an article in the humanities? I suppose, for an article that quoted interviews, there could be fabrication of these, but what else? Would misquoting a source count if this were unintentional? Misquoting in minor ways happens all the time, so I suppose it would be a matter of egregiousness.

I’m reminded of when “This American Life” retracted an episode because the reporting was proven to be mostly fabricated.

The contrast between 400 retractions, some due to honest errors, and an alleged 80,000 cases of plagiarism suggests the latter may be overstated. Perhaps plagiarism is the wrong word. Or if these are minor, why do we care? It sounds like more of an editorial issue than an integrity issue, especially if they do not affect the results being reported. My guess is that they mostly occur in the initial problem description. Any data on this?

Many journals are now using plagiarism-detecting software, which gives a “score” for each paper. The large number of cases might come from choosing an arbitrary cut-off score and calling anything above it plagiarism. The “score” is additive, and a lot of it comes from using the same words to describe methods.

The software is getting better at distinguishing real plagiarism, but it is not there yet.
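For what it’s worth, here is a minimal, hypothetical sketch (in Python) of the kind of additive overlap score described above. It is not how CrossCheck or any commercial tool actually works, and the function names and the 0.20 threshold are purely illustrative – but it shows how shared methods wording inflates the score just as copied findings do, and how much hangs on where an arbitrary cut-off is set.

```python
# Hypothetical sketch of an additive text-overlap score (NOT how CrossCheck works).
# It counts overlapping word 5-grams ("shingles") between a manuscript and a
# previously published text, then flags the pair if the overlap crosses an
# arbitrary threshold. Boilerplate methods wording raises the score exactly as
# copied results would.

def shingles(text, n=5):
    """Return the set of n-word shingles (overlapping word n-grams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def overlap_score(manuscript, source, n=5):
    """Fraction of the manuscript's shingles that also appear in the source text."""
    m, s = shingles(manuscript, n), shingles(source, n)
    return len(m & s) / len(m) if m else 0.0

THRESHOLD = 0.20  # arbitrary cut-off; where it sits largely determines how many "cases" get counted

manuscript = ("Samples were centrifuged at 3000 rpm for ten minutes "
              "and the supernatant was discarded before analysis.")
published = ("All samples were centrifuged at 3000 rpm for ten minutes "
             "and the supernatant was discarded prior to analysis.")

score = overlap_score(manuscript, published)
print(f"overlap score: {score:.2f}", "flagged" if score >= THRESHOLD else "not flagged")
```

In this toy example two near-identical methods sentences score around 0.8, so the pair is “flagged” even though nothing of scientific substance has been copied – which is precisely the point about arbitrary thresholds.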

I don’t know where Nick Steneck got his numbers of plagiarism cases from, I’m afraid (he may have said but I didn’t make a note). However, retractions are not undertaken lightly, so I’m not surprised that the figure is very low compared with plagiarism overall.

Good post, good update. I’ve brought it to the attention of some of my colleagues who are climate change deniers on the basis of some unfortunate and high-profile article retractions and, apropos of today’s podcast, to point out how the scholarly community is self-policing – even if that means policing government agencies.

A question.

Hypothetically, imagine that ‘plagiarism’ is detected by the relevant software – then what is better:
To beat the authors up and ‘report them’?
Or to point out the error, to engage, to educate, and to change behaviour?
And what ‘levels’ of plagiarism require different actions – or do they?

Just curious as to what people think…

I agree that the COPE guidelines are very helpful, but I think we all have a lot more work to do in terms of teaching authors, especially young/early career researchers, about ethical issues. There seems to be very little formal education on this, either in developed or emerging economies.

Are you concerned about research integrity or publication integrity? Because these are two very separate issues. As far as research goes, the greatest and foremost breach of integrity is non-reporting, as it leads to publication bias, which is one of the biggest headaches for systematic reviews.
