In 2018, I talked with Melinda Baldwin (Associate Professor of History at the University of Maryland) about a fascinating article she authored entitled “Scientific Autonomy, Public Accountability, and the Rise of ‘Peer Review’ in the Cold War United States” (Isis, volume 109, number 3, September 2018).

In talking again with Melinda this month, we began a discussion of whether peer review has a role to play in uncovering scientific fraud. Perhaps at first blush one might suggest that peer review is entirely appropriate as a fraud detector, but Melinda suggests this is not the role of peer review.

I ask Melinda here to take us through the role of peer review in the context of scientific fraud from her perspective as a historian of science – a perspective that I hope you agree sheds clear light on the benefits and limitations of peer review in the scientific endeavor.


What is the role of peer review?

I’m going to give an annoying non-answer: it depends on who you ask! Journal editors may have different answers from scientists, who have different answers from members of the public interested in scientific findings.

One thing that almost everyone agrees on, however, is that peer review is supposed to be a gatekeeper in the world of scientific publishing. The process of peer review is supposed to detect bad science and prevent it from getting into print. You see this vision of peer review in some of the recent coverage of Elizabeth Holmes’s trial in California. Nature ran a news article suggesting that if Holmes’s company, Theranos, had been expected to publish more peer-reviewed papers, their fraudulent practices would have been detected far earlier.

But, as Scholarly Kitchen readers probably know, being peer-reviewed doesn’t ensure that a scientific paper got it right. There are a lot of cases where papers do make it past peer review and are later found to be deeply flawed — or, worse, are found to be based on deliberately fabricated data.

Is there a role for peer review in uncovering fraud?

I certainly think it’s possible for fraud to be uncovered during the peer review process. A careful referee may notice inconsistencies in the methods or data that suggest the authors aren’t being honest about their results.

But as a general rule I think peer review is not set up to detect fraud, and we shouldn’t be surprised when fabricated data makes it through a peer review process. Most scientists approach the peer review process with the basic assumption that the authors are being truthful when they describe their setup and results. If a referee starts with that collegial assumption of honesty, they’re probably not going to go through a paper’s data with a fine-tooth comb looking for clues that it might be falsified. One notable recent case of a peer-reviewed paper that faced fraud accusations was the “When Contact Changes Minds” paper in Science by political scientists Michael LaCour and Donald Green. The paper passed Science’s peer review process—but it wasn’t until another political science graduate student tried to replicate LaCour’s survey methods that anyone realized something wasn’t right.

Furthermore, a referee who openly accused authors of fraud or misrepresentation would be getting themselves into a potentially thorny situation. Do they include that accusation in a referee report? Have a quiet word with the editor about their suspicions? Recommend rejection and say they’re not convinced by the conclusions, but without saying outright that they think the work is being misrepresented? Even though referees’ names are usually not shared with the author, the journal’s editors will certainly know who wrote the report, and I think many scientists — especially junior ones — might worry that they will come across as petty or malicious if they speak up about their suspicions. The Committee on Publication Ethics (COPE) offers some suggestions for how referees and editors might approach potential cases of fraud.

From your position as a historian of peer review, are there examples of academic fraud and how they were uncovered?

Generally speaking, when fraud is uncovered, it’s uncovered not by the peer review process, but after publication. The LaCour and Green paper is one example. In the case of Theranos, a paper in a journal called Hematology Reports attracted some post-publication scrutiny that contributed to the company’s collapse. A pathologist named Adam Clapper looked up the Hematology Reports article and wrote a blog entry saying that he didn’t buy the company’s claims about its technology. After looking into Theranos more, Clapper became so suspicious that he contacted Wall Street Journal reporter John Carreyrou, whose reporting ultimately brought down the company — see chapter 18 of Carreyrou’s book Bad Blood for more on that.

There are also a host of examples that occupy a grey area — they’re arguably not quite fraud, because the scientists behind these papers seem to genuinely believe what they’re claiming, but they’re studies with serious problems that end up not being replicable. With those papers, too, we see that when the problems come to light, it’s almost always because of post-publication scrutiny. One historical example is the French physicist René Blondlot, who claimed to have discovered something he called an “N-ray” in the early 20th century. When researchers were unable to replicate his published results, an American physicist named Robert Wood traveled to Blondlot’s laboratory to see Blondlot’s procedures. Wood found that if he removed what was supposed to be a key piece of Blondlot’s apparatus — a crystal that was meant to refract n-rays onto the detector — Blondlot reported no change in his results. So that’s an example of a deeply flawed study whose flaws were uncovered by post-publication investigation.

A more recent example, and quite a famous one, is that of cold fusion. University of Utah chemists Stanley Pons and Martin Fleischmann claimed they could generate room-temperature nuclear fusion using a palladium cathode, and secured publication for their work in the Journal of Electroanalytical Chemistry. But when other laboratories were not able to replicate the results they described, the findings were largely discredited.

Are there examples of peer review entangling with issues of fraud, successfully or unsuccessfully?

There have been several examples of fraud in the peer review process itself! In 2012, there was a famous case of a researcher who entered false email addresses for suggested reviewers. The researcher then accepted the invitations to review the paper and submitted favorable referee reports for his own work. It happened again in 2017 at the publisher Frontiers, and in 2019 with nine papers at Elsevier journals. More publishers and editors are now taking steps to verify reviewer identity, but I think it shows that the peer review process itself can unfortunately be vulnerable to bad actors.

But I can’t think of any documented examples of fraud being uncovered during the peer review process. It’s possible — likely, even — that such a detection has happened, but that it’s been handled confidentially and not publicized. I know in history, there are a few anecdotes that make the rounds during conferences about plagiarized papers being withdrawn after a referee points out the plagiarism. I’d be very curious to do an anonymous survey of scientific journal editors about their experiences with any academic misconduct they’ve uncovered prior to publication!

What potential solutions do you recommend for uncovering academic fraud?

I think one key component of any solution would have to be protection for whistleblowers. David Broockman, the graduate student whose analysis revealed the irregularities in the LaCour and Green paper, has said that most of his mentors and colleagues discouraged him from looking so closely at the data behind “When Contact Changes Minds.” They were afraid this young scholar was going to build a reputation as a troublemaker and have difficulty getting a job. If it’s likely that raising questions about a paper is going to damage the questioner’s career, we should expect that most people will try to ignore their suspicions and move on — especially given how competitive the academic job market is these days.

But too much support for self-described whistleblowers could also lead down some dark paths. For example, in the 1980s, there was a very thorny episode involving accusations of research misconduct against a biologist named Thereza Imanishi-Kari. Margot O’Toole, a postdoc in Imanishi-Kari’s lab, was not able to replicate the results from a paper her supervisor had published. O’Toole eventually accused Imanishi-Kari of misrepresenting her data. Because that study had received NIH funding, Congress got involved, the Secret Service went through Imanishi-Kari’s laboratory notebooks, one of Imanishi-Kari’s coauthors resigned a position as a university president — it’s one of the most dramatic cases of fraud accusations in modern science.

Ultimately, Imanishi-Kari was exonerated of deliberate wrongdoing, but not before a lot of collateral damage had occurred. And one of the things that I think makes this such a fascinating and troubling example is that both Imanishi-Kari and O’Toole were arguably acting in good faith — they both genuinely believed they were in the right. I think a lot of investigations into suspected fraud may end up in a similar murky area, where perhaps a study wasn’t done as rigorously as it could have been, but it’s not clear that there was a genuine intent to deceive. So I think one important step will be to think about where an experimenter crosses the line from being merely sloppy into actively committing academic fraud.

Robert Harington


Robert Harington is Chief Publishing Officer at the American Mathematical Society (AMS). Robert has the overall responsibility for publishing at the AMS, including books, journals and electronic products.

Discussion

15 Thoughts on "Fraud and Peer Review: An Interview with Melinda Baldwin"

As someone working in research integrity, I can say with some certainty that peer review does detect fraud… not all of it, of course! But some examples would be reviewers spotting manipulated or duplicated images, as well as noticing irrelevant (or even fictional) references – a common symptom of paper mills. Whistleblowers should always feel safe to speak up when looking at COPE member journals, as they should be assured of anonymity.

Professor Baldwin’s valuable Isis article brought timely attention to the importance of peer review in fraud detection. However, clarification seems necessary concerning post-publication peer review. It would be helpful if Professor Baldwin had distinguished between formal pre-publication review by accredited peer reviewers selected by editors, and less formal post-publication review by self-appointed reviewers, who may have the same (or higher) qualifications as those selected by editors. According to Professor Baldwin, the latter are more likely to detect fraud, so we may assume that peer review works.

Post-publication peer review by accredited self-appointed peer reviewers was the basis of the now-defunct PubMed Commons “experiment.” Post-publication review by not necessarily accredited peer reviewers is now provided by PubPeer (the “online journal club”), and For Better Science has recorded some of its successes at fraud detection. Here the detection may have been by reviewers that editors might sometimes have deemed accredited.

Given the above, perhaps Professor Baldwin would clarify the following:

“I can’t think of any documented examples of fraud being uncovered during the peer review process.”

“when the problems come to light, it’s almost always because of post-publication scrutiny.”

“Generally speaking, when fraud is uncovered, it’s uncovered not by the peer review process, but after publication.”

As I read it, it’s clear that what was meant by ‘peer review’ in this interview, was ‘formally organised pre-publication review’, since, as you say, it’s contrasted with the point that most problems come to light because of post-publication scrutiny. (Which has always been the case; indeed, was the case even before the Internet was invented, let alone specific websites which have improved the process of publicly critiquing papers).

But yes, they could have been clearer about that.

Thank you for the comment Dr. Forsdyke! When I use the term “peer review” here, I am specifically referring to pre-publication review by editor-selected referees.

Could it be that the reason you can’t think of “any documented examples of fraud being uncovered during the peer review process” lies in the fact that fraud detected by peer review does not get publicly documented? I mean, what you see in the published papers that turns out to be fraud is just the tip of the iceberg, the larger part having been previously stopped by the peer review process, and those exchanges are kept in the drawers of the editors.

Publishers have an important role to play in facilitating the exposure of the sins of omission and commission on the part of authors. While peer review should not bear the full weight of this responsibility, there is a great deal more publishers can contribute to enabling peers (both during the review process and thereafter) to access all relevant data and methods for others to evaluate and attempt to reproduce prior claims and assertions. Collaborate with platform vendors and repositories to associate these data with articles, including post-publication; Create social spaces for commentary (monitored most likely); Promote best-practices and acknowledge those making positive contributions… Ultimately, sussing out fraud from human error is a community responsibility which, when provided robust information and tools, is well-positioned to improve the quality of scientific communication.

Ultimately, sussing out fraud from human error [and sampling issues, stochastic processes, coding issues, changes in population, social history, and the many other processes that can cause a “failure” to replicate] is a community responsibility

Talking to some researchers at an academic conference earlier today I was surprised how prevalent ghost authors appear to be in their field. If their Editor-in-Chief wasn’t keeping an eagle eye out for it, and rejecting papers using ghost authors, there would be significant issues in their journals.

The post is about peer review as a barrier to fraud, but this is a blog about scholarly publishing. I’m with Melinda Baldwin that peer review can seldom stop careful, deliberate fraud. But it seems to me the standard action by publishers faced with questions of fraudulent published articles is to sit on their hands. Take the three ways most fraud comes to light: 1) image manipulations published in plain view; 2) journals requiring publication of raw data, where others attempting to reproduce the results find impossible or highly improbable numbers; and 3) inside information.

For #1, someone with a keen eye for patterns picks up on images that have been clearly manipulated – cloned, pasted, resized, flipped, appeared in a different paper representing different data. In other words, slam dunk prima facie evidence of fraud, where motive, whodidit, etc. are immaterial. Typical journal responses are to wait for an institutional investigation, which if conducted will have opaque findings released later, and tend to report “nothing to see here.” Plenty of examples at Retraction Watch or Elisabeth Bik’s blog https://scienceintegritydigest.com/.

For #2, where the data don’t add up, those cases come to light at journals that have strong data archiving and sharing policies (most don’t). The LaCour fraud came out through someone else going through the data and realizing that they had to have been dry-labbed. The most impressive example I’ve ever seen of journal editors stepping up was the Jonathan Pruitt affair. There the spider man got tangled in his own web when others doing secondary data analyses found suspicious numbers and asked questions of the journal editors. A journal editor organized post-publication review, enlisting an uninvolved statistician and others (http://ecoevoevoeco.blogspot.com/2021/05/). Other journal editors followed suit. Papers were retracted. The university investigation has yet to be released, if it ever will be.

#3 Inside information only comes out with whistleblowers, and these are fraught. Both whistleblower and the accused are vulnerable to abuse, and in the United States there is sometimes financial incentive for whistleblowing. Unlike #1 and #2, journals would be advised to sit tight until these cases are adjudicated.

Unfortunately, there seem to be far too many examples of publishers sitting on their hands when instances of #1 and #2 are convincingly demonstrated.

On the role of peer review, it is indeed a common notion that “peer review is supposed to be a gatekeeper in the world of scientific publishing. The process of peer review is supposed to detect bad science and prevent it from getting into print.” The catch is that there’s no clear boundary around “bad science,” and we too often allow it to be a stand-in for “the conclusions do not hold up.” Well, there are lots of good science papers that have made significant contributions where the particular conclusions don’t hold up over time or as more data are collected (in some cases spurred by those papers). I prefer something like:

…the primary scientific goal of scholarly publishing is advancing science and our understanding of nature. Publications serve both to present tangible new results or ideas and to form a foundation for additional work. Hypotheses, speculation, inference, and tentative and even incorrect conclusions all play important roles in the process of science, as do comments and criticisms of prior results. Guided by peer review, editors of journals make publication decisions based on the level of contribution a particular article makes to these larger goals. This level is partly guided by community standards for referencing, data collection, inference, open data and software, etc., for which peer review (and journal standards) help. This process does not ensure that all claims are or will be irrefutable. Certain levels of confidence may be desirable but are not absolute. Ethics and integrity foster this primary scientific goal and provide authority for applying the integrated understanding that results. Many attacks on integrity in publishing derive from incorrect assumptions regarding this goal and the support for it provided by the peer-review process. Indeed, the occasional poorly reviewed or fraudulent paper is increasingly less important to the overall integrity of publishing. A greater concern is the ethical responsibilities that support the institution of scholarly publishing. These need much more attention than they are receiving.

I have seen peer review catch fraud a few times. It has happened when reviewers raised questions that caused co-authors to dig more deeply into the data or contributions of another co-author. These cases have not been made public.

“Perhaps in the first blush, one may suggest that peer review is entirely appropriate as a fraud detector, but Melinda suggests this is not the role of peer review.”

Does she? I don’t see a case being made here that peer review *should* not be the primary point of fraud detection, only a reiteration of the fact that it hasn’t served that function very well so far, which we all already know. Since post-publication review has been the main mechanism for fraud detection, it seems obvious that pre-publication peer review could also be doing that. It’s all just scientists looking at papers and speaking up.

Instead, I’m seeing an article describing ways our very informal system of peer review needs to change in order to facilitate this role. Clear guidelines need to be established for what peer reviewers should examine in the course of review. Peer review teams need to include people who have appropriate expertise to evaluate things like the statistical methodology in use. Data sharing needs to be mandatory. Peer reviewers need training on common signals of fraud or error. Clear procedures need to be established for how evidence uncovered this way must be reported, and who will be notified. None of this is rocket science.

The biggest barrier, instead, is that this is an intensive ask for volunteer work that has essentially no rules or requirements at the moment. Reviewers can be, and often are, slipshod and capricious in their comments because it’s already extremely difficult to get scientists to volunteer for this completely unrecognized and unrewarded work. And the real solution is that peer review needs to be paid and acknowledged work. Make it a contract, and you can put real expectations in place for performance.

Re: N-rays.
I came across this article: https://doi.org/10.1177%2F030631293023001003 The author observes that it is dubious to celebrate Robert Wood as a hero of science, as he did not engage in any sort of “good scientific practice”, relying instead on tricks and self-confidence.
Being an experimentalist myself, I do feel for Blondlot. I can’t imagine how my carefully crafted and unique experimental setup would behave if somebody tampered with it *behind my back*.
I see that the paper I link to has over 100 citations; still, if you Google N-rays, the pervasive version is Blondlot the idiot, Wood the hero. Thus, I thought it useful to mention the paper here. I definitely object to choosing Wood as the patron saint of modern whistleblowers; they deserve a better patron than him.
This is of course setting aside the question of whether N-rays exist or not; I’m afraid we will never know.

As an editor I have also seen peer review successfully identify fraud, and have had to persuade peer reviewers to provide non-accusatory details of what seems wrong with the work so that it can be pursued. I wonder if Dr Baldwin and others are more familiar with fraud detected post-publication simply because it happens in public.

I twice had junior scientists from a foreign country include a (different) reputable American scientist as a coauthor on a submitted manuscript. I suppose someone had advised them a big-name scientist would get an easy ride through review. What they didn’t realize was, our submittal system automatically sends a note to every author thanking them for their submittal. In both cases, the American author, who had never met the foreign author and knew nothing about the manuscript, immediately wrote me a note asking what the heck was going on!
I summarily rejected the papers, but I never did get to the bottom of this behavior. I guess the lesson here is that fraud can take a lot of forms.

Comments are closed.