Most readers are experienced with image manipulation. We adjust the lighting, color tint, and contrast in our photographs. More experienced users will crop, resize, touch up a mole or some wrinkles, maybe even remove an ex-partner from a photograph. Some will go further, cutting-and-pasting parts of other images, like the head of Geoffrey Bilder on the body of a runway model. Image alteration is an industry standard in the world of fashion, where models are used as illustrations of new styles, but completely unacceptable in the newsroom, where photographs are intended to represent reality.
The laboratory scientist is akin to the photojournalist, representing findings from gels and blots as they are — not stylized illustrations — and yet there is a real temptation to beautify the data. "Some of these manipulations are considered to distort the original data and may be classified as scientific misconduct," wrote Mike Rossner and Ken Yamada 12 years ago in The Journal of Cell Biology (JCB). Nearly fifteen years ago, JCB instituted the first editorial policy on what is considered acceptable (and unacceptable) image manipulation — a policy that quickly became a model for other journals — and, in 2002, began screening all images in accepted manuscripts prior to publication.
Unfortunately, the problem of inappropriate image manipulation has only gotten worse since 2002, reports Elisabeth Bik, a microbiologist at Stanford University, in a paper (“The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications”) posted to bioRxiv on April 20. Her coauthors, Arturo Casadevall and Ferric Fang, are editors of two microbiology journals themselves.
Visually screening 20,621 scientific papers published in 40 scientific journals from 1995 to 2014, the researchers detected 782 papers (3.8%) that included at least one figure containing an inappropriate image manipulation. The vast majority of these images were found in PLOS ONE. While the size and dates of the samples differed across journals, the detection rate ranged from 0.3% (1 of 329 papers) in JCB to 12.4% (11 of 89 papers) in the International Journal of Oncology.
Not surprisingly, authors of a paper containing an inappropriate image manipulation were much more likely to have published additional papers containing image problems, Bik reports. After adjusting for each country's publication output, papers from Chinese and Indian authors were nearly twice as likely to contain image problems, while papers from UK, Japanese, German, and Australian authors were far less likely to contain them.
Bik also reports that cases of image manipulation problems jumped in 2003 and speculates that the mainstream use of image software, improved image quality, and author-prepared images may be explanations. I’d like to add that many publishers implemented electronic manuscript submission software around 2002, making it suddenly cheaper and faster for some authors to submit manuscripts to Western journals.
The frequency of image manipulation in the scientific record has been the focus of media coverage of this paper (viz. Retraction Watch and Nature), although we should be aware that the true rate may be higher. A paper published last year on image duplication in cancer journals reported rates around 25%. Bik's count was conservative: she required two other trained microbiologists to agree with her evaluation, and she relied only on visual detection and rudimentary software. To me, the real issue arising from this paper is not the exact frequency of image manipulation in the scientific literature, but how editors and publishers will respond and what actions they will take to prevent future problems.
The Committee on Publication Ethics (COPE) does not have a guideline on image manipulation but strongly advises journals to specify their own policies on acceptable practice. While some journals have detailed policies on what constitutes acceptable and unacceptable image manipulation, others have vague policies, or worse, none at all. In researching publisher policies for this piece, I discovered that many publishers have policies on image manipulation; however, these policies are often buried on a page about ethics and are not found in the submission instructions to authors. In these cases, an author may have no knowledge of an existing policy when submitting a manuscript.
Working from Bik’s list of journals (Supplementary Table 1), the most detailed policy I could find was in the PLOS ONE submission guidelines, which include a section entirely devoted to blots and gels. BioMed Central (BMC) simply endorses the guidelines from the Journal of Cell Biology. American Society for Microbiology journals have strong but more general guidelines on image manipulation, as do PNAS, Wiley, and Elsevier, although the latter two list these guidelines on a separate part of their websites. I could not find a statement for Nature Publishing Group journals. One Springer journal included an overly general statement about not fabricating or manipulating one’s data. Hindawi’s instructions to authors contained no information on image manipulation, only a general statement on data fabrication and falsification that included a stern, but ambiguous, warning about author sanctions. I could not find any mention of image manipulation policies for Taylor & Francis, although the publisher does provide an ethics guide for its authors.
Given that many of the image manipulations identified in this study clearly violated publisher policies, I expect to see hundreds of corrections and retractions issued in the coming months. While Bik did not publish the list of offending articles with her paper, she did report all 781 instances to the current journal editors. To date, 6 papers have been retracted and 71 will be (or have been) corrected; in 4 cases, the editor decided that no action was required. For the remaining 700 papers, Bik has not been notified of any action (personal email correspondence). Independently, I contacted several editors. JCB has taken its single instance very seriously, as have the editors of mBio and Infection and Immunity. I have not yet heard back from PLOS, which published nearly half of all manipulated images detected in Bik’s study.
COPE, of which the publishers mentioned above are all members, offers a flowchart on "What to do if you suspect fabricated data in a published manuscript." One does not need to be an editor to understand how much time and how many human resources are required to investigate and correct suspected image manipulation post-publication. Many readers of this blog actively perform similar duties every day, and one does not need to look back far to uncover investigations that revealed hundreds of papers across dozens of journals that had to be retracted because of fraudulent peer review.
Mike Rossner takes scientific images seriously — so seriously that he started a company dedicated to the issue. He considers images to be data, no different from numbers or any other kind of evidence submitted in support of a scientific claim. Authors, he argues, need to resist the temptation to tamper with their evidence, even if their intentions are not deceitful.
As to the effort that goes into screening every manuscript before publication, Rossner replied:
It’s an effort that journal editors should be willing to take on to protect the published record. In general, it’s a lot easier to deal with image problems in a paper before publication rather than after.