Last year, the National Institutes of Health and the field of neuroscience were rocked, once again, by a serious scientific fraud case revolving around image manipulation. Dr. Eliezer Masliah, formerly director of the Division of Neuroscience at the National Institute on Aging, was fired for systematically falsifying research images over a quarter of a century. While the scale of the damage is not easy to measure, we know that researchers trying to build on Dr. Masliah’s work have gone down blind alleys and unwittingly wasted perhaps millions in research funding, and that at least one drug went through expensive clinical trials but was not approved in several countries because its efficacy is highly questionable.

This is far from the only example of falsified images. In 2022, Sylvain Lesné’s work on the role of a specific amyloid oligomer was similarly discredited. That case was something of a double whammy: it not only represented a great deal of wasted time and research funding, but also produced fallout that risked undermining other perfectly sound work in Alzheimer’s research. A cursory search for similar cases turned up Santosh Katiyar, who published manipulated images purporting to prove the anti-cancer effects of grape seed extract; Rijun Gui, whose highly cited work on fluorescent nanocrystals and quantum dots was retracted for image integrity issues; and David Panka, who was found to have manipulated images in three papers and was censured by the US Office of Research Integrity. The list goes on for as long as you have the time and patience to search Retraction Watch and PubPeer. Don’t just take it from me: Dr Dror Kolodkin-Gal’s guest post from last year quotes the US Office of Research Integrity in reporting that 68 percent of all cases it opened between 2007 and 2008 concerned image manipulation. As Elisabeth Bik put it in an opinion piece for The New York Times, ‘Science has a nasty Photoshop problem.’


How publishers, equipment manufacturers and institutions are responding

So it’s not surprising that concerns about image integrity have gained a lot of attention recently. In 2004, Mike Rossner (Managing Editor) and Kenneth Yamada (Editor) of the Journal of Cell Biology wrote what is now considered a seminal article on how editors should safeguard image integrity. They noted that, at the time, many journals said little or nothing about image alterations in their author guidelines. In some cases, guidelines stated that the relationship between the original image and the published image must be maintained, and that the specific nature of any enhancements or manipulations must be disclosed, often in the figure legends.

Today, stricter guidance and requirements around image handling are gradually becoming more common. Many of them follow the best practice recommendations for image integrity that STM published in 2021. In particular, many journals now require original, uncropped, unedited versions of blots, gels, and microscopy images. Wiley journals, for example, require authors to submit uncropped versions of blots as supplementary data and to keep their own original copies for five years. Brain Communications (published by Oxford University Press) and Reproduction and Fertility (published by the Society for Reproduction and Fertility) have similar requirements.

While there’s been a lot of progress, there’s still some variance across publishers and journals. For example, EMBO Press asks that source files and data for a range of experiment types be either uploaded as supplementary data or deposited as linked data in a repository, even going so far as to specify the directory structure in which the raw data should be organized. PLOS asks for all original blot images to be compiled in a single PDF and placed in a repository, linked to the DOI of the article for cross-reference. The Journal of Molecular Endocrinology and the Journal of Biological Chemistry don’t specifically require raw images to always be publicly available, but warn authors that they may be asked to provide them.

Equipment manufacturers have also been following these emerging requirements. Licor has a best practices page and a publication guidelines checklist on its website, as does ThermoFisher. The most interesting example I’ve seen is Cytiva, part of global life sciences company Danaher, which is already producing gel and blot scanners and biomolecular imagers that sign images with an encrypted hash. The images can then be verified as original using free software downloadable from Cytiva’s website.
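To make the idea concrete, here is a minimal sketch of what hash-based verification looks like in practice. It is purely illustrative and not Cytiva’s actual scheme; the key, the example data, and the digest format are all assumptions. The point is simply that a digest recorded at capture time lets anyone holding the verification key detect later edits.

```python
import hashlib
import hmac

# Hypothetical instrument key; purely illustrative, not Cytiva's actual scheme.
INSTRUMENT_KEY = b"example-instrument-key"

def sign_image(image_bytes: bytes) -> str:
    """Record a keyed digest (HMAC-SHA256) over the raw bytes at capture time."""
    return hmac.new(INSTRUMENT_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, recorded_digest: str) -> bool:
    """Recompute the digest and compare; any edit to the bytes breaks the match."""
    return hmac.compare_digest(sign_image(image_bytes), recorded_digest)

# The scanner stores the digest alongside the captured blot or gel image...
original = b"raw pixel data straight off the scanner"
digest = sign_image(original)

# ...and a journal or integrity office later checks the file it received.
print(verify_image(original, digest))                         # True: untouched
print(verify_image(original + b" brightness tweak", digest))  # False: altered
```

A production system would more likely use asymmetric signatures, so the verification key could be published openly; that approach is sketched in the next section.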

Research institutions themselves must take a leading role here. One sentiment I’ve heard repeatedly while talking to people about research integrity is that institutions haven’t tackled this issue aggressively enough; some are privately accused of being unwilling to take strong action. While this is an understandable position, with many institutions having relatively vague policies on data falsification, things appear to be changing. The UK Research Integrity Office (UKRIO) links to resources including seminal papers, webinars, and infographics on best practices. Glasgow University, for example, has published guidelines on image manipulation that are broadly similar to those of many publishers, including advice to keep raw images “in case you are asked to explain exactly what changes have been applied”. In the US, the Office of Research Integrity at the Department of Health and Human Services provides resources and has published an infographic on acceptable and unacceptable image changes, which is linked from Cold Spring Harbor Laboratory’s page on image integrity.

Tackling this problem will require cross-stakeholder collaboration

Over the past few months, I’ve been working with STM Solutions to assess the feasibility of a technical solution to the detection of falsified images in research articles. The tl;dr: the technology already exists. An eventual system could involve signing images in much the same way that secure web pages are signed, so that they could later be verified by a publisher, or potentially by an institution conducting an investigation. A good analogy for anybody who’s seen a police drama is the chain of custody used by the criminal justice system, which documents where evidence came from, who handled it, and what they did with it. Some relevant standards are already available and technical initiatives are underway. For example, the Coalition for Content Provenance and Authenticity (C2PA) has already developed a standard for signing images aimed at the digital news ecosystem, as Todd Carpenter discussed in his post last week, and the W3C Credible Web Community Group is looking at the broader challenges of untrustworthiness on the web.
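As a rough illustration of how that kind of signing and chain of custody might work, here is a short sketch using public-key signatures (via the widely used Python cryptography package), the same basic mechanism that underpins HTTPS. It is not the C2PA format; the manifest fields, identifiers, and workflow are assumptions made purely for the example.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The instrument (or lab) holds a private key; publishers and institutions hold
# the matching public key. All field names and identifiers here are invented
# for illustration and are not part of the C2PA specification.
instrument_key = Ed25519PrivateKey.generate()
verification_key = instrument_key.public_key()

image_bytes = b"raw microscopy data straight off the instrument"

# A provenance "manifest": what was captured, when, and by which device,
# together with a digest of the image itself.
manifest = json.dumps({
    "sha256": hashlib.sha256(image_bytes).hexdigest(),
    "instrument": "example-imager-0001",
    "captured": "2025-03-01T10:15:00Z",
}).encode()

signature = instrument_key.sign(manifest)  # one signed link in the chain of custody

# Later, a publisher checks that the manifest really came from the instrument
# and that the submitted file still matches the digest recorded at capture.
try:
    verification_key.verify(signature, manifest)
    recorded = json.loads(manifest)["sha256"]
    print(recorded == hashlib.sha256(image_bytes).hexdigest())  # True if unaltered
except InvalidSignature:
    print("Manifest was tampered with after capture")
```

The attraction of this approach is that verification does not rely on trusting the author’s word: anyone holding the public key can confirm both that the manifest came from the instrument and that the submitted image matches the digest recorded at capture.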

That noted, this will not be a purely technical problem to solve; it will eventually need the participation and support of everyone – funders, institutions, publishers, learned societies, researchers, instrument manufacturers, standards bodies, and technologists. There will, for example, need to be some kind of organization or governance structure that acts as the trust provider and remains stakeholder neutral. It will also be vitally important not to create administrative burden or workflow overhead, meaning that anything built would need to bolt directly into existing workflows. Perhaps the most challenging aspect of all will be the breadth of collaboration required. While multi-stakeholder-governed research infrastructure is not new (just look at Crossref, DataCite, and ORCID), this innovation will require cooperation between an even broader set of stakeholders. Publishers and instrument manufacturers haven’t previously collaborated on very much, although as the examples of Licor and Cytiva mentioned earlier suggest, perhaps that isn’t too distant an idea.

In the meantime, we need to take some first steps. Since the report was launched in December, I have presented it at a panel discussion with Neil Jefferies of the Open Preservation Foundation and the Bodleian Library in Oxford, Olivia Nippe of Elsevier, and Carmen Lozano Abellán of instrument manufacturer Cytiva as part of the STM Integrity Day. My colleague Fiona Murphy and I, along with Joris van Rossum from STM Solutions, also facilitated a workshop on the idea at the Researcher to Reader Conference in London in February, and the topic will be discussed in a panel at the STM US Annual Conference in April. We’ve also been speaking to a number of publishers, librarians, research integrity experts, and technologists to better understand how we might go about things.

Right now, we’re working on gathering potential stakeholders for discussions and considering a pilot. If we can gather enough interest and obtain funding, the pilot would likely involve a small number of publishers and institutions with a simple, lightweight workflow aimed at demonstrating that certification at the point of data collection is possible, combined with validation at the point of manuscript review. If you work for a publisher or institution that may be interested in participating in a pilot, or just providing input, please get in touch.

Phill Jones

Phill Jones is a co-founder of MoreBrains Consulting Cooperative. MoreBrains works in open science, research infrastructure and publishing. As part of the MoreBrains team, Phill supports a diverse range of clients from publishers and learned societies to institutions and funders, on a broad range of strategic and operational challenges. He's worked in a variety of senior and governance roles in editorial, outreach, scientometrics, product and technology at such places as JoVE, Digital Science, and Emerald. In a former life, he was a cross-disciplinary research scientist at the UK Atomic Energy Authority and Harvard Medical School.

Discussion

5 Thoughts on "Tackling Science’s ‘Nasty Photoshop Problem’"

I am puzzled that your article fails to mention Sholto David and his successful efforts to unearth photo manipulations. He and Bik (mentioned just once in your article) should be awarded medals for their sleuthing. The sins pointed out by Bik and David are far more serious than anything Francesca Gino is accused of.

Hi Paul,

You’re certainly correct that the work of researchers like Sholto David, Elisabeth Bik, and many others who have drawn attention to these issues over the years has been invaluable in raising awareness. As you note, I linked out to Dr Bik’s article in The New York Times.
My apologies if you didn’t think I gave that aspect of the story sufficient prominence. I felt it important to focus on the systemic scale of the issue and what might be done to improve the situation.

Finally the instrument providers are getting in the game. This has been the way forward for 20+ years now and it’s hard to believe it’s taken them this long.

I agree that instrument manufacturers have a critical role to play in solving this issue.
I believe that this will take a truly multi-stakeholder approach including publishers, learned societies, standards bodies, institutions, funders and instrument makers.

Yes, multi-stakeholder collaboration will be key, and there are several win-win scenarios for instrument makers in the collaboration. The emerging C2PA standard* may provide some clarity about how to engage in a way that’s useful. None of the companies you mentioned are part of the initiative, but the photography arms of several companies that also produce lab instrumentation, such as microscopes, are. Hopefully this provides not just a way to solve the image manipulation issue, but also a way to improve the reproducibility of research overall by facilitating reproducible and shareable workflows for processing all kinds of experimental data.

* discussed recently by Chef Carpenter: https://scholarlykitchen.sspnet.org/2025/03/13/research-integrity-content-provenance-and-c2pa/
