Most readers are experienced with image manipulation. We adjust the lighting, color tint, and contrast in our photographs. More experienced users will crop, resize, touch up a mole or some wrinkles, maybe even remove an ex-partner from a photograph. Some will go further, cutting and pasting parts of other images, like the head of Geoffrey Bilder on the body of a runway model. Image alteration is standard practice in the world of fashion, where models are used as illustrations of new styles, but completely unacceptable in the newsroom, where photographs are intended to represent reality.
The laboratory scientist is akin to the photojournalist, representing findings from gels and blots as they are — not stylized illustrations — and yet there is a real temptation to beautify the data. Only, some of these manipulations are considered to distort the original data and may be classified as scientific misconduct, as Mike Rossner and Ken Yamada wrote 12 years ago in The Journal of Cell Biology (JCB). Nearly fifteen years ago, JCB instituted the first editorial policy on what is considered acceptable (and unacceptable) image manipulation — a policy that quickly became a model for other journals — and, in 2002, began screening all images in accepted manuscripts prior to publication.
Unfortunately, the problem of inappropriate image manipulation has only gotten worse since 2002, reports Elisabeth Bik, a microbiologist at Stanford University, in a paper (“The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications”) posted to bioRxiv on April 20. Her coauthors, Arturo Casadevall and Ferric Fang, are themselves editors of microbiology journals.
Visually screening 20,621 scientific papers published in 40 scientific journals from 1995 to 2014, the researchers detected 782 papers (3.8%) that included at least one figure containing an inappropriate image manipulation. Nearly half of these (348 papers) were published in PLOS ONE. While the size and dates of sampling differed across journals, the detection rate ranged from 0.3% (1 in 329 papers published in JCB) to 12.4% (11 in 89 papers published in the International Journal of Oncology).
Not surprisingly, authors of a paper containing an inappropriate image manipulation were much more likely to have published additional papers with image problems, Bik reports. Relative to their share of publications, papers from Chinese and Indian authors were nearly twice as likely to contain image problems, while papers from UK, Japanese, German, and Australian authors were far less likely to do so.
Bik also reports that cases of image manipulation jumped in 2003 and speculates that the mainstream adoption of image-editing software, improved image quality, and author-prepared images may be explanations. I’d like to add that many publishers implemented electronic manuscript submission systems around 2002, making it suddenly cheaper and faster for some authors to submit manuscripts to Western journals.
Media coverage of the paper (viz. Retraction Watch and Nature) has focused on the frequency of image manipulation in the scientific record, but we should be aware that the true rate may be higher. A paper published last year on image duplication in cancer journals reported rates around 25%. Bik’s approach was conservative: she required two other trained microbiologists to agree with her evaluation, and she relied only on visual detection and rudimentary software. To me, the real issue arising from this paper is not the exact frequency of image manipulation in the scientific literature, but how editors and publishers will respond and what actions they will take to prevent future problems.
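As an aside on what “rudimentary software” can and cannot do, here is a minimal, hypothetical sketch in Python (my illustration, not Bik’s actual workflow) of the crudest automated screen: flagging figure files within a paper that are byte-for-byte identical. A duplicated panel that has been cropped, flipped, or contrast-adjusted would sail right past such a check, which is one reason detection rates based on visual screening are likely an undercount.

```python
# Hypothetical sketch, not Bik's actual workflow: the crudest form of
# automated duplication screening, flagging only figure files whose
# bytes are identical. Crops, rotations, or contrast tweaks defeat it.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 digest of the raw file bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def exact_duplicates(image_paths: list[Path]) -> list[list[Path]]:
    """Group files that are byte-for-byte identical."""
    groups: dict[str, list[Path]] = {}
    for p in image_paths:
        groups.setdefault(file_digest(p), []).append(p)
    return [g for g in groups.values() if len(g) > 1]

# Usage (hypothetical filenames):
# exact_duplicates([Path("fig1a.tif"), Path("fig1b.tif"), Path("fig3c.tif")])
```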
The Committee on Publication Ethics (COPE) does not have a guideline on image manipulation but strongly advises journals to specify their own policies on acceptable practice. While some journals have detailed policies on what constitutes acceptable and unacceptable image manipulation, others have vague policies, or worse, none at all. In researching publisher policies for this piece, I discovered that many publishers have policies on image manipulation; however, these policies are often buried on a page about ethics and are not found in the submission instructions to authors. In these cases, an author may have no knowledge of an existing policy when submitting a manuscript.
Working from Bik’s list of journals (Supplementary Table 1), the most detailed policy I could find was in the PLOS ONE submission guidelines, which include a section entirely devoted to blots and gels. BioMed Central (BMC) simply endorses the guidelines of the Journal of Cell Biology. American Society for Microbiology journals have strong but more general guidelines on image manipulation, as do PNAS, Wiley, and Elsevier, although the latter two list these guidelines on a separate part of their websites. I could not find a statement for Nature Publishing Group journals. One Springer journal included an overly general statement about not fabricating or manipulating one’s data. Hindawi journals included no information in their instructions to authors, offering instead a general statement on data fabrication and falsification with a stern, but ambiguous, warning about author sanctions. I could not find any mention of image manipulation policies for Taylor & Francis, although the publisher does provide an ethics guide for its authors.
Given that many of the image manipulations identified in this study clearly violated publisher policies, I expect to see hundreds of corrections and retractions issued in the coming months. While Bik did not publish the list of offending articles with her paper, she did report all 782 instances to the current journal editors. To date, 6 papers have been retracted and 71 will be (or have been) corrected; in 4 cases, the editor decided that no action was required. For the remaining roughly 700 papers, Bik has not been notified of any action (personal email correspondence). Independently, I contacted several editors. The JCB has been taking its single instance very seriously, as have the editors of mBio and Infection and Immunity. I have not yet heard back from PLOS, which published nearly half of all manipulated images detected in Bik’s study.
COPE, of which all the publishers mentioned above are members, offers a flowchart on what to do if you suspect fabricated data in a published manuscript. One does not need to be an editor to understand how much time and staff effort are required to investigate and correct suspected image manipulation post-publication. Many readers of this blog actively perform similar duties every day, and one does not need to look back far to find investigations that revealed hundreds of papers across dozens of journals that had to be retracted because of fraudulent peer review.
Mike Rossner takes scientific images so seriously that he started a company dedicated to the issue. He considers images to be data, no different from numbers or any other kind of evidence submitted in support of a scientific claim. Authors, he argues, need to resist the temptation to tamper with their evidence, even if their intentions are not deceitful.
As to the effort that goes into screening every manuscript before publication, Rossner replied:
It’s an effort that journal editors should be willing to take on to protect the published record. In general, it’s a lot easier to deal with image problems in a paper before publication rather than after.
Discussion
This is a tricky one because it builds on long-established analog world practices. I was doing ethnographic fieldwork in a university bioscience laboratory about 15 years ago and one of the postdocs showed me a gel with the results of an interaction between bacteria and an antibiotic. I was told that it established that the experiment worked and that it would be good enough for an industrial lab. However, the postdoc was going to re-do the gel because it did not look pretty enough to photograph and publish. The quality of the representation in the photograph would, I learned, be taken as an indicator of a level of professional skill that might be relevant when this person was looking for their next position. Image manipulation just moves this traditional process one step down the line.
I think what is needed is a definition of image manipulation. If the image is manipulated for clarity but the result does not change, is that manipulation? We are dealing with some rather technical photography: photography whose goal is to represent what is, but which may not represent what was actually seen. I think this is why line drawings or non-photographic art may be better than photography. I can recall the days of old when medical and life science artists were highly paid and budgets for their work were large. I worked on a surgical atlas in which each photograph was, for purposes of clarity, accompanied by an art rendering. Was the art considered manipulation?
Lastly, some labs still use film and then scan the negative to make a digital file. In the process, detail can be lost, and the scanned file is then manipulated to regain or enhance what was lost. Perhaps authors should be required to state the camera, lens, and other settings used to capture the image.
In my (less competitive than cancer research) research field it was standard practice to use line drawings or even paintings in papers and monographs. And I would often ask the illustrator to emphasise certain features that would be useful to readers. Now that digital photographic images are much easier to make, I regularly compensate for the limitations of my own equipment by using software to sharpen the image, or to adjust contrast and colour balance. The precise nature of the image is not crucial, and I have not felt guilty about this, nor felt any need to tell an editor what I have done with the image. Am I being immoral?
The COPE case on Image Manipulation as a General Practice (http://publicationethics.org/case/image-manipulation-general-practice) tackles this question from the perspective of intent: was the image intended to report original data, or was it used to “illustrate” a finding? The group does not take sides, but concludes that an image-as-illustration must be clearly labeled as such in the paper to avoid ambiguity.
Almost every photograph needs some manipulation to make it suitable for print. Perhaps the best solution for key images is to include the original, straight-out-of-the-camera source file in the supplementary material.
JCB spells out what is (and isn’t) acceptable pretty clearly in their instructions to authors (see: http://jcb.rupress.org/site/misc/ifora.xhtml)
- No specific feature within an image may be enhanced, obscured, moved, removed, or introduced.
- The grouping of images from different parts of the same gel, or from different gels, fields, or exposures, must be made explicit by the arrangement of the figure (i.e., using dividing lines) and in the text of the figure legend.
- Adjustments of brightness, contrast, or color balance are acceptable if they are applied to every pixel in the image and as long as they do not obscure, eliminate, or misrepresent any information present in the original, including the background. Non-linear adjustments (e.g., changes to gamma settings) must be disclosed in the figure legend.
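To make the last bullet concrete, here is a minimal, illustrative sketch in Python with NumPy (my own example, not part of the JCB guidelines). A brightness/contrast change applies the same linear function to every pixel; a gamma adjustment is non-linear, so it brightens faint and strong signals unequally, which is why it must be disclosed.

```python
# Illustrative only: linear (whole-image) vs. non-linear (gamma)
# adjustments on an 8-bit grayscale image, per the distinction in
# the JCB-style guidelines quoted above.
import numpy as np

def linear_adjust(img: np.ndarray, contrast: float = 1.2, brightness: float = 10.0) -> np.ndarray:
    """out = contrast * in + brightness, applied identically to every pixel."""
    out = contrast * img.astype(np.float64) + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

def gamma_adjust(img: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """out = 255 * (in/255) ** gamma; non-linear, so it stretches faint
    signal and compresses bright signal unequally and must be disclosed."""
    out = 255.0 * (img.astype(np.float64) / 255.0) ** gamma
    return np.clip(out, 0, 255).astype(np.uint8)

# A faint band (pixel value 40) next to a strong band (200): the linear
# adjustment treats both identically, while gamma boosts the faint band
# proportionally far more.
band = np.array([[40, 200]], dtype=np.uint8)
print(linear_adjust(band))  # [[ 58 250]]
print(gamma_adjust(band))   # [[ 69 215]]
```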
Good to have this important paper highlighted here. It’s very alarming that with just simple visual inspection, Bik and her colleagues were able to pick up so many image problems, and that these were missed by the reviewers, editors and journals. As well as seconding their call for greater/more effective scrutiny of images pre-publication, I think we need to greatly improve education on how to capture and handle images, how to prepare image data for reporting, and about all the issues associated with inappropriate manipulation. Primarily for researchers, but also to help ensure that third parties who are providing authors with all sorts of services, including manuscript/image preparation, know what they’re doing and are working to rigorous standards.
NB Bik et al. looked at just one type of inappropriate image manipulation – duplication – not the whole range of possible inappropriate manipulations. So the real level of problem images in the literature is likely considerably higher.
This all brings to mind the difference between having a policy and enforcing that policy:
https://scholarlykitchen.sspnet.org/2016/01/13/what-price-progress-the-costs-of-an-effective-data-publishing-policy/
Or as Jerry Seinfeld taught us, “…that’s really the most important part of the reservation: the holding. Anybody can just take them.” Anybody can have a policy. Holding authors to that policy is where things get difficult.
This is, of course, something journals and editorial offices can do, but it takes time and effort, and likely either software development or payment for products already available. At the same time, we’re under intense pressure to cut costs (and hold down subscription or APC prices) as well as to streamline the process (speed of publication is an increasing demand). These demands are at odds with demands for high levels of rigor and sweating the details. Are librarians/readers/authors willing to pay a little more to have articles screened in this manner, or are the lax, unenforced policies we currently have “good enough”?
I agree that there are costs for software and human screening prior to publication. On the other hand, the human costs of correcting or retracting the literature can be enormous. In the case of PLOS ONE, 348 papers will need to be investigated (or at least I hope they’ll be investigated, as PLOS has not responded) by someone other than the editor handling the original paper.
Just to clarify Phil’s point, PLOS ONE had a high total number of errors, but its percentage was not higher. Quoting from the actual research paper: “In PLOS ONE, from which the largest number of papers was screened, 4.3% of the papers were found to contain inappropriately duplicated images, whereas the percentage of papers with image duplication ranged from 0.3% (Journal of Cell Biology) to 12.4% (International Journal of Oncology) among the other journals, with a mean of 4.2%. Hence, even though PLOS ONE was the journal that provided the largest set of papers evaluated in this study, it is not an outlier with regard to inappropriately duplicated images relative to the other journals examined.”
Sure, the rate of image manipulation in PLOS ONE may not be different from the mean rate (remember that PLOS ONE has a huge influence on that mean rate); however, the point I was making was that PLOS has a strong and detailed policy about manipulating blots and gels, which puts them in a position of taking necessary action to correct these 348 problem papers. If the rate of image manipulation is similar for other years not sampled in Bik’s study, PLOS may find itself with thousands of problem papers to investigate.
I’m perhaps in a position to find it more easily (I’m an editor at NPG), but the image policy that you couldn’t find for Nature Publishing Group is here: http://www.nature.com/authors/policies/image.html
Thanks Stephen. Currently, there is no mention of an image policy in the Author Instructions page (http://mts-nature.nature.com/cgi-bin/main.plex?form_type=display_auth_instructions), so it is likely that a submitting author would never see your policy, unless it is sent to the author sometime during or after the submission process.
Indeed, it’s clearly a bit hard to find – a point that I will pass on to the right people.