On October 4, 2013, Science Magazine published a piece of investigative journalism ("Who's Afraid of Peer Review?") revealing the deceptive peer-review and business practices of 157 open access journal publishers.
Not surprisingly, the investigation resulted in a maelstrom of criticism and commentary from pundits, publishers, and industry groups. Has anything changed as a result of the exposé? I recently interviewed Science reporter John Bohannon about his views:
Q: It has been a month since Science Magazine published your investigation. Since your exposé, have any additional manuscripts been accepted or rejected?
On the day the story went to press at Science, another acceptance arrived by email. Another arrived just last week from a journal that I had classified as “dead” due to the lack of response to emails. Some editors haven’t caught wind of the sting.
A couple of “rejections” have also rolled in, but these were all from journals that had previously accepted the papers. One publisher (based in Thailand) even claimed that they knew the authors and data were fake and their acceptance of the paper was part of an elaborate counter-sting.
Q: What has been the response of editors and publishers? Have any journals ceased publication? Have any editors/editorial board members resigned in protest? Do any of them blame you, personally, for the outcomes? Have any threats (legal or otherwise) been made towards you or Science Magazine as a result of the exposé?
A couple of weeks before the story was published, I contacted editors and publishers specifically named in the story. Their responses are printed in the article, ranging from falling on their swords and accepting responsibility to blaming the publishers and claiming they were not involved with the journal. But since the story was published, editors and publishers have largely fallen silent.
One exception is an editor based in the Middle East who says that the sting has cost him his job. It pains me to hear that. But then again, he clearly wasn’t doing his job.
As far as I can tell, it has been business as usual for the 157 publishers that got stung. I know of only one fully confirmed closure of a journal (as reported in Retraction Watch). There have been statements by publishers that they intend to close a journal, but I’ll believe it when I see it.
Of course, closing a single journal is nothing but a pinprick for many of the publishers that accepted the fake paper. Most publish dozens–some, hundreds–of titles.
I was bracing myself for lawsuits and PR blitzes from the publishers and editors that got stung. Ironically, the attacks came instead from advocates of the open access movement.
Q: Your study revealed widespread deception and fraud committed by publishers, their journals, and their editorial staff. Given your study population, this is not entirely surprising. However, a few prominent publishing houses made your list (for instance, Wolters Kluwer, Sage, and Elsevier). Are they addressing the situation any differently? Should they?
This is something I’ve heard from all quarters. “Given your study population, [the high rate of acceptance] is not entirely surprising.”
Why is that unsurprising? That is equivalent to saying, “The majority of publishers with open access journals that charge author fees are known to be fraudulent.” How was that known?
The 304 publishers that I probed in this investigation were not a small sample of the population of fee-charging open access publishers, but rather the complete set (as of October 2012, excluding non-English language publishers and the small proportion that have no biological, medical, or chemical journals). There seems to be widespread confusion about this point.
To reiterate my methods, I created a complete list of every publisher in the world with at least one fee-charging open access journal. Then for each one of those publishers, I selected the journal with the scope that best matched the fake paper’s topic–the anti-cancer properties of a chemical extracted from a lichen. Over 60% of this population failed the peer review test and accepted the paper.
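The arithmetic behind that headline figure is straightforward to reproduce. Here is a minimal sketch of the sting's bookkeeping, using invented publishers and outcomes (not Bohannon's actual data): one verdict per publisher, for the single best-matching journal probed at each.

```python
# Invented outcomes for illustration only: one probed journal per publisher.
results = {
    "Publisher A": "accepted",
    "Publisher B": "rejected",
    "Publisher C": "accepted",
    "Publisher D": "no response",
    "Publisher E": "accepted",
}

# Count the publishers whose journal accepted the fatally flawed paper.
accepted = sum(1 for verdict in results.values() if verdict == "accepted")
rate = accepted / len(results)
print(f"{accepted}/{len(results)} publishers accepted the paper ({rate:.0%})")
```

In the actual investigation the denominator was the full population of 304 fee-charging open access publishers, not a sample, which is why no sampling correction applies.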
You could argue that the high level of acceptance is unsurprising because nearly half of the publishers were listed on Beall’s List of predatory publishers. That would be persuasive if the vast majority of the acceptances were from Beall’s List alone, leaving the Directory of Open Access Journals (DOAJ)-listed publishers clean. But sadly, that is not what happened. Nearly half of all of the fee-charging open access publishers listed in the DOAJ failed this test.
So shouldn’t you feel as surprised and worried as I felt when I saw these results?
But you are not alone in expressing a lack of surprise. I can only assume that people are either far more cynical than I am about academic publishing, or they just haven't thought all this through clearly yet.
Here is a typical example of the latter: Heather Joseph, the executive director of the Scholarly Publishing and Academic Resources Coalition, mistakenly claims that "the journals that the flawed articles were submitted to were not selected in an appropriately randomized way." It seems that she (like many others, both critics and supporters alike) fails to grasp that this was not a sample but rather a full survey of the entire population of fee-charging open access publishers. It could be that she is saying that the selection of journals from each publisher should have been randomized. But if so, that's crazy. Submitting a cancer biology paper to journals devoted to psychology, particle physics, and other completely unrelated fields would be such a strong confounder that the whole experiment would be sunk. (I'm being charitable and assuming that she just made the first error.)
This meme [of generalizability] has traveled far and wide, and all the way to the top. It’s even baked into the official response from the industry’s ostensible leader, OASPA, which claims that “these journals were not selected in an appropriately randomized way.” Again, I want to be charitable and assume that people are simply making an error, rather than wantonly misrepresenting reality.
Q: There have been several themes in the responses to your exposé. Some critics have dismissed the entire study on the grounds that you did not conduct a proper controlled trial, while others blame Science for not putting your piece through proper peer review. Some publishing pundits have taken the opportunity to attack the institution of peer review (Michael Eisen argues that "peer review is a joke"), while others believe that your methods reveal deep bias (the DOAJ has accused you of being a racist). Why do you think this exposé has spawned so many different interpretations and accusations? Was there a problem in how the study was framed? Conversely, do other controversial studies evoke similar responses?
I’ll take these one by one.
Dismissed it because the author is racist: When I read the DOAJ’s response, I stopped taking them seriously as a professional organization. For the sake of those who have not yet read my article, I clearly explained the reasons for randomly generating African authors and institutions. The entire investigation was motivated by the complaint of an African biologist who felt like she was getting scammed by a fee-charging open access publisher. To replicate her experience, I wanted to submit another biology paper with (fabricated) African authors. Doing so also allowed me to scale up the sting to hundreds of fake paper submissions. As a science journalist who has reported extensively from Africa, I knew that it is not at all unusual for Africa-based scientists and institutions to have little or no Google footprint. For US or European scientists and institutions, it is immediately suspicious when a Google search turns up no hits. The whole investigation would be a non-starter without this method. The DOAJ statement is the worst kind of mud-slinging. It doesn’t make me angry, because it is so incoherent that it’s not even threatening. But it makes me deeply embarrassed for them.
Dismissed it because it wasn’t a “controlled” study: At best, this is another innocent misunderstanding of the basic facts of my investigation, and at worst it is deliberately pseudo-scientific rhetoric. But I can understand why this meme has spread. It has all the right ingredients: “This study published in Science used a research paper that lacked proper controls to sting journals, but the Science study itself lacked controls. How ironic!” Indeed, it would be ironic if it were true. The question that my investigation asked was this: “Among fee-charging open access publishers (the kind that was scamming the Nigerian biologist who contacted me and motivated the investigation in the first place), how many are living up to their promise of carrying out rigorous peer-review?” Answering that question does not require a control. If I wanted to know what proportion of zebras were infected by a disease, would I have to use horses as a control group? The question that my investigation asked and answered just doesn’t seem to interest some critics. They are more interested in the question, “Which publishing business model is better, open access or subscription?” That’s a legitimate question. It just doesn’t happen to be one I’m interested in, and it’s not the question I pursued.
Dismissed it because it wasn’t peer-reviewed: Some have rushed to my defense by pointing out that this was a piece of investigative journalism published by the news department at Science, with absolutely no input from the editorial staff. That’s true, but many people may not realize that journalism does have its own version of peer review. Your article first gets critiqued by one editor, and only after you’ve satisfied all of his/her questions and revisions does it go on to a “top editor” who independently reviews it. Many journalists take it a step further and show their draft article to outside experts. (I did that.) In any case, there can be no more rigorous peer review than the one that my investigation has been receiving in the weeks since publication. I am proud that not a single fault has been identified in the basic methodology, results, or conclusion. People have complained that I didn’t do a completely different experiment, but no one has doubted that 157 out of 304 fee-charging open access publishers failed a robust test of peer review / editorial quality control.
Dismissed it because this has nothing to do with open access; peer review itself is broken: This contortionist argument was first articulated by PLoS cofounder Michael Eisen in his blog post, “I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals.” He writes that peer review has failed before, even in prestigious journals like Science, so don’t point your finger at open access journals–and how dare Science of all journals do so! But I struggle to see how this makes the results any less troubling. He implies that we should expect that 60% of all journals, subscription-based included, would fail this peer review test. That could be the case, but wouldn’t that make these results even more disastrous and worrying for the scientific community?
Dismissed it because Science is biased: Is Science biased against the open access movement? I can’t answer this question. Until I dove into this investigation, I hardly paid any attention to the question of business models in academic publishing. Perhaps it would help if people knew that I actually started this investigation entirely on my own initiative. My previous editor was too nervous about being sued by publishers. But just months before the investigation was complete, a new editor took the helm of the news department at Science and was passionate about publishing it. We carefully went over everything with a lawyer, resulting in, for example, a huge amount of time spent redacting bank account numbers from the acceptance letters and invoices in the emails that were made public. So any bias that Science may or may not have against the open access movement had nothing to do with my motivation or the methods of my investigation, and it certainly had nothing to do with the results.
Dismissed it because the journals that accepted the fake paper don’t matter: You didn’t mention this one, but I’m adding it because it’s the most legitimate critique I’ve heard. And again, this was first articulated by Eisen in a discussion hosted by Peter Suber. I encourage everyone to read it from start to finish. You will not be disappointed. You get to see an accusation by Eisen, with even more cunning political cynicism than the DOAJ mud-slinging, that I am a partisan hack. You get a taste of the surprisingly nasty attacks that have come my way, e.g. from Björn Brembs. You get to see one of the publishers, Gunther Eysenbach, claim that he found “problems” with my data–but you have to read his blog to find out that those “problems” are actually extra submissions that had absolutely no effect on the results of the investigation and were excluded because they weren’t relevant. And most amazingly, you get to see David Solomon insinuate that my investigation violated codes of human subjects research. The discussion is an absolute circus. It may not show the academic publishing community at its most dignified, but it certainly shows them at their most entertaining.
It could very well be that the 60% acceptance rate across fee-charging open access publishers is less worrying than it seems. If the data were weighted by the number of articles published per year by each journal, for example, the rate might grow smaller. That is a completely legitimate question, and someone should do that analysis. Of course, that will require some work. (Eisen just states it as if it’s a self-evident truth–a claim he made during this video discussion.) But at this point, it is just an unsubstantiated hunch. Eisen believes (or hopes) it is true because “the big 3” (PLoS, BioMed Central, and Hindawi) rejected the fake paper. But he never mentions that Elsevier, Wolters Kluwer, and Sage accepted it. Don’t those publishers matter? Without doing the actual work of analyzing the data, should we dismiss these results on the basis of a hunch?
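The re-weighting analysis Bohannon describes is easy to express. Below is a hedged sketch with invented journal names, article counts, and verdicts (the real analysis would require each journal's actual annual output); it shows how a per-publisher acceptance rate can shrink once failures are weighted by publication volume.

```python
# Hypothetical data: (journal, articles published per year, accepted the fake paper?)
journals = [
    ("Journal A", 1200, False),  # large journal, passed the test
    ("Journal B",  900, False),
    ("Journal C",   50, True),   # small journals, failed
    ("Journal D",   30, True),
    ("Journal E",   20, True),
]

# Unweighted: fraction of journals that accepted (how the sting was tallied).
unweighted = sum(accepted for _, _, accepted in journals) / len(journals)

# Weighted: fraction of all published articles appearing in failing journals.
total_articles = sum(n for _, n, _ in journals)
weighted = sum(n for _, n, accepted in journals if accepted) / total_articles

print(f"unweighted acceptance rate: {unweighted:.0%}")
print(f"article-weighted rate:      {weighted:.1%}")
```

With these made-up numbers the unweighted rate is 60% while the article-weighted rate is under 5%, which is exactly the kind of gap Eisen's hunch predicts; whether the real data behave this way is, as Bohannon says, an open empirical question.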
Q: As a result of your investigation, the DOAJ has removed 114 OA journals from its list and claims that it will be “revising and tightening” its criteria for entry into the list. The Open Access Scholarly Publishers Association (OASPA) recently issued its own investigation, terminating the memberships of two publishers and putting one on probation. Do you believe either of these organizations is addressing the issue adequately?
I do not take the DOAJ seriously and have no expectation of professional behavior on their part (see “racist” accusation above). It could be that it’s just a few bad apples over there, but I’m still so shocked by their conduct.
As for OASPA, I’ve been impressed. They conducted an investigation (led by eLife executive director Mark Patterson), and just announced the results (11 November). Two publishers who accepted the fake paper–Dove Press and Hikari–have been kicked out of OASPA. As for SAGE, one of the most prestigious publishers to accept the paper, they only got a slap on the wrist: They will be listed as “under review” for 6 months. Perhaps this is a “too big to fail” moment for academic publishing.
From the very beginning of my investigation, I was in phone and email contact with the president of OASPA, Paul Peters (who is also the chief strategy officer of Hindawi). In the beginning, he was totally unaware of my sting operation. I just interviewed him, as myself (reporter, John Bohannon), about Hindawi. He was completely transparent and helpful, far more than any other open access publisher whom I interviewed. The fact that OASPA is led by Paul Peters, a professional to the core, lends them great credibility. Hopefully they will lead the effort to root out fraudsters from their industry.
Q: Many academic libraries have created open access funds for their authors. The Harvard HOPE Fund, which serves as the basis for many other library funds, stipulates that funding for article processing fees be limited to journals that are listed in the DOAJ and whose publishers are members of OASPA. Your study reveals that even these criteria are not sufficient. Are additional criteria for funding necessary?
Libraries now have one more criterion they can apply: If a journal or its publisher is marked as “accepted” on this page, then beware!
Q: In order to investigate the peer review and business practices of these journals, it was necessary for you to deceive the editors about your identity and intentions. Do you believe that such deception was warranted? As a journalist, do you work by a different ethical code than if you were working as a scientist?
Yes, we have a slightly different set of professional guidelines, though there is a great deal of ethical overlap with human subjects research. I submitted fake, fatally flawed scientific papers to research journals for the purpose of probing their claim of providing rigorous peer review. That falls squarely within the domain of investigative journalism.
This question has also been addressed by a professional bioethicist who studies this issue and he agrees that my sting operation violated neither the professional codes of journalism, nor of human subjects research. Here is how Harvard bioethicist Dan Wikler explains it in the discussion hosted by Peter Suber: “Superficially, the Common Rule might seem to apply to Bohannon’s study, but it’s equally true that it superficially might seem to apply to a vast range of journalistic (and also academic and even literary) investigations that have – rightly – never been considered morally suspect… I don’t think Bohannon’s investigation needs any serious defense either on human subjects grounds.”
Q: Is there anything else you learned from this study?
I learned that I have been too naive and idealistic about scientists. I assumed that the results [of my study] would speak for themselves. There would be disagreements about how best to interpret them, and what to do about them, but it would be a civil discussion and then a concerted, rational, community effort to address the problems that the results reveal. But that is far from what happened. Instead, it was 100% political and many scientists that I respected turned out to be the most cynical political operators of all. I got into the business of science journalism (after a PhD in molecular biology) because I had complete certainty in two things: that I love journalism (the investigation, following the facts, speaking truth to power), and that I love science. This experience has shaken my certainty a bit. On the bright side, I learned a huge amount of computer coding!