Integrity has always been a – perhaps the – critical element in research. It’s essential for everyone reading or using scholarly information to be certain that it is authoritative and trustworthy, all the more so in a digital environment, where it isn’t always easy to distinguish good research from bad research.
Over the years, the scholarly community has developed a number of tools to help with this, with peer review being the most obvious. But in a fast-changing world – where the number of new researchers is expanding rapidly,* the amount of cross-institution and/or international collaboration is at an all-time high,** and the volume of papers published continues to increase year by year*** – how confident can we really be about the quality of the articles we read? Is there more unethical and fraudulent research being published these days, or are we simply better at detecting it? What is the most effective way of tackling misconduct – the carrot or the stick? And, if an article needs to be retracted, what is the best way to do this? These were just some of the issues raised at the recent 3rd World Congress on Research Integrity in Montreal.
First things first – how big a problem is research misconduct? The answer depends on what you are measuring and how you are measuring it. Nicholas Steneck (University of Michigan) said that 1-3% of all research articles are affected by misconduct, and quoted an estimated figure of 80,000 plagiarism cases per year. But those numbers include everything from minor ‘offences’ (mostly either unintentional or misguided) to cases of major and intentional fraud. Another possible measurement was highlighted by David Wright, of the US Office of Research Integrity (ORI), who told the conference that the number of allegations of research misconduct reported to his office doubled in 2012, and that the cases themselves are also getting much more complex. Monitoring the number of retractions may also be a helpful indicator – the estimate for 2012 is around 400, about on a par with 2011, according to Retraction Watch (Ivan Oransky of Retraction Watch was another speaker at the Congress).
Opinions also differed on the best approach to preventing misconduct. For example, Michael Farthing (University of Sussex, England) believes that a speed camera approach works best (i.e., if researchers know there’s a good chance they may be audited, they are less likely to offend in the first place). However, Makoto Asashima of a funding organization, the Japan Society for the Promotion of Science, announced that his organization has recently increased the penalty for misconduct from five years of restricted eligibility for grants and other funding to 10 – very much a ‘stick’ approach. Beth Fischer of the University of Pittsburgh highlighted the fact that, by its requirement to publish, the current tenure system can effectively reward researchers for bad behavior; instead, she suggested, we must find ways to reward people for good behavior.
Irrespective of whether you choose the carrot or the stick, there are still challenges relating to what the rewards or punishments should be, and who should be responsible for implementing them – not to mention how to identify offenders in the first place. Although tools such as CrossCheck have helped enormously, wrongdoers still slip through the system, sometimes for years – as Oransky noted, the record for longest time from article publication to retraction is a staggering 27 years! All of this is especially complex when dealing with collaborations across multiple institutions and/or countries, because there are currently no national or global agreements about appropriate rewards or deterrents and, other than through meetings such as the Congress, few opportunities for stakeholders to discuss these issues.
There are some very helpful guidelines, including the outputs from these meetings – see the Singapore Statement on Research Integrity (2010) and the forthcoming Montreal statement from this year – as well as materials produced by the Committee on Publication Ethics (COPE). But there is currently no consistency in how these are applied or enforced. Encouragingly, the Global Research Council (a new organization comprising the heads of 70 funding organizations globally) is currently looking at establishing high-level principles of research integrity which, if adopted, could provide some real momentum. However, as several speakers pointed out, even funders don’t always feel they have the power to enact appropriate sanctions because they are not the wrongdoer’s employer.
There’s another whole set of questions around retractions – when to retract rather than correct; how much information to include about the retraction; what to do about other versions of a paper (especially in an increasingly OA world). As Veronique Kiermer (Nature) pointed out, publishers have to try to balance the needs of those authors who have made an honest mistake (and should be encouraged to improve rather than stigmatized) with the need to ensure that the data and other information we publish are accurate. Partly as a result, there are no consistent standards across all publishers, though again the COPE guidelines and other tools have proved invaluable.
So, while there still seem to be more questions than answers, at least those questions are being asked – and being asked by stakeholders from across the global communications community. Perhaps a forthcoming survey being carried out by the University of Tennessee and the CIBER group, as part of a study of authority and trustworthiness in the digital age, will provide a few more answers on what researchers care about. More on that in a future post.
*25% increase between 2002 and 2007 (Tony Mayer, speaking at the 3rd World Congress on Research Integrity)
**Just 26% of articles now have a single author, while 35% come from international collaborations (Nature Publishing Index 2012)
***The STM Report 2012 (page 23)