On October 4, 2013, Science Magazine published a piece of investigative journalism revealing the deceptive peer-review and business practices of 157 open access journal publishers (“Who’s Afraid of Peer Review?” Oct 4, 2013).
Not surprisingly, the investigation resulted in a maelstrom of criticism and commentary from pundits, publishers and industry groups. Has anything changed as a result of the exposé? I recently interviewed Science reporter, John Bohannon, on his views:
Q: It has been a month since Science Magazine published your investigation. Since your exposé, have any additional manuscripts been accepted or rejected?
On the day the story went to press at Science, another acceptance arrived by email. Another arrived just last week from a journal that I had classified as “dead” due to the lack of response to emails. Some editors haven’t caught wind of the sting.
A couple of “rejections” have also rolled in, but these were all from journals that had previously accepted the papers. One publisher (based in Thailand) even claimed that they knew the authors and data were fake and their acceptance of the paper was part of an elaborate counter-sting.
Q: What has been the response of editors and publishers? Have any journals ceased publication? Have any editors/editorial board members resigned in protest? Do any of them blame you, personally, for the outcomes? Have any threats (legal or otherwise) been made towards you or Science Magazine as a result of the exposé?
A couple of weeks before the story was published, I contacted editors and publishers specifically named in the story. Their responses are printed in the article, ranging from falling on their own sword and accepting responsibility to blaming the publishers and claiming they were not involved with the journal. But since the story was published, editors and publishers have largely fallen silent.
One exception is an editor based in the Middle East who says that the sting has cost him his job. It pains me to hear that. But then again, he clearly wasn’t doing his job.
As far as I can tell, it has been business as usual for the 157 publishers that got stung. I know of only one fully confirmed closure of a journal (as reported in Retraction Watch). There have been statements by publishers that they intend to close a journal, but I’ll believe it when I see it.
Of course, closing a single journal is nothing but a pinprick for many of the publishers that accepted the fake paper. Most publish dozens–some, hundreds–of titles.
I was bracing myself for lawsuits and PR blitzes from the publishers and editors that got stung. Ironically, the attacks came instead from advocates of the open access movement.
Q: Your study revealed widespread deception and fraud committed by publishers, their journals, and their editorial staff. Given your study population, this is not entirely surprising. However, a few prominent publishing houses made your list (for instance, Wolters Kluwer, Sage, and Elsevier). Are they addressing the situation any differently? Should they?
This is something I’ve heard from all quarters. “Given your study population, [the high rate of acceptance] is not entirely surprising.”
Why is that unsurprising? That is equivalent to saying, “The majority of publishers with open access journals that charge author fees are known to be fraudulent.” How was that known?
The 304 publishers that I probed in this investigation were not a small sample of the population of fee-charging open access publishers, but rather the complete set (as of October 2012, excluding non-English language publishers and the small proportion that have no biological, medical, or chemical journals). There seems to be widespread confusion about this point.
To reiterate my methods, I created a complete list of every publisher in the world with at least one fee-charging open access journal. Then for each one of those publishers, I selected the journal with the scope that best matched the fake paper’s topic–the anti-cancer properties of a chemical extracted from a lichen. Over 60% of this population failed the peer review test and accepted the paper.
You could argue that the high level of acceptance is unsurprising because nearly half of the publishers were listed on Beall’s List of predatory publishers. That would be persuasive if the vast majority of the acceptances were from Beall’s List alone, leaving the Directory of Open Access Journals (DOAJ)-listed publishers clean. But sadly, that is not what happened. Nearly half of all of the fee-charging open access publishers listed in the DOAJ failed this test.
So shouldn’t you feel as surprised and worried as I felt when I saw these results?
But you are not alone in expressing a lack of surprise. I can only assume that people are either far more cynical than I am about academic publishing, or they just haven’t thought this all through clearly yet.
Here is a typical example of the latter: Heather Joseph, the executive director of the Scholarly Publishing and Academic Resources Coalition, mistakenly claims that “the journals that the flawed articles were submitted to were not selected in an appropriately randomized way.” It seems that she (like many others, both critics and supporters alike) misunderstands that this was not a sample but rather a full survey of the entire population of fee-charging open access publishers. It could be that she is saying that the selection of journals from each publisher should have been randomized. But if so, that’s crazy. Submitting a cancer biology paper to journals devoted to psychology, particle physics, and other completely unrelated fields would be such a strong confounder that the whole experiment would be sunk. (I’m being charitable and assuming that she just made the first error.)
This meme [of generalizability] has traveled far and wide, and all the way to the top. It’s even baked into the official response from the industry’s ostensible leader, OASPA, which claims that “these journals were not selected in an appropriately randomized way.” Again, I want to be charitable and assume that people are simply making an error, rather than wantonly misrepresenting reality.
Q: There have been several themes in the responses to your exposé. Some critics have dismissed the entire study on the grounds that you did not conduct a proper controlled trial, while others blame Science for not putting your piece through proper peer review. Some publishing pundits have taken the opportunity to attack the institution of peer review (Michael Eisen argues that “peer review is a joke”), or believe that your methods reveal deep bias (the DOAJ has accused you of being a racist). Why do you think this exposé has spawned so many different interpretations and accusations? Was there a problem in how the study was framed? Conversely, do other controversial studies evoke similar responses?
I’ll take these one by one.
Dismissed it because the author is racist: When I read the DOAJ’s response, I stopped taking them seriously as a professional organization. For the sake of those who have not yet read my article, I clearly explained the reasons for randomly generating African authors and institutions. The entire investigation was motivated by the complaint of an African biologist who felt like she was getting scammed by a fee-charging open access publisher. To replicate her experience, I wanted to submit another biology paper with (fabricated) African authors. Doing so also allowed me to scale up the sting to hundreds of fake paper submissions. As a science journalist who has reported extensively from Africa, I knew that it is not at all unusual for Africa-based scientists and institutions to have little or no Google footprint. For US or European scientists and institutions, it is immediately suspicious when a Google search turns up no hits. The whole investigation would be a non-starter without this method. The DOAJ statement is the worst kind of mud-slinging. It doesn’t make me angry, because it is so incoherent that it’s not even threatening. But it makes me deeply embarrassed for them.
Dismissed it because it wasn’t a “controlled” study: At best, this is another innocent misunderstanding of the basic facts of my investigation, and at worst it is deliberately pseudo-scientific rhetoric. But I can understand why this meme has spread. It has all the right ingredients: “This study published in Science used a research paper that lacked proper controls to sting journals, but the Science study itself lacked controls. How ironic!” Indeed, it would be ironic if it were true. The question that my investigation asked was this: “Among fee-charging open access publishers (the kind that was scamming the Nigerian biologist who contacted me and motivated the investigation in the first place), how many are living up to their promise of carrying out rigorous peer-review?” Answering that question does not require a control. If I wanted to know what proportion of zebras were infected by a disease, would I have to use horses as a control group? The question that my investigation asked and answered just doesn’t seem to interest some critics. They are more interested in the question, “Which publishing business model is better, open access or subscription?” That’s a legitimate question. It just doesn’t happen to be one I’m interested in, and it’s not the question I pursued.
Dismissed it because it wasn’t peer-reviewed: Some have rushed to my defense by pointing out that this was a piece of investigative journalism published by the news department at Science, with absolutely no input from the editorial staff. That’s true, but many people may not realize that journalism does have its own version of peer review. Your article first gets critiqued by one editor, and only after you’ve satisfied all of his/her questions and revisions does it go on to a “top editor” who independently reviews it. Many journalists take it a step farther and show their draft article to outside experts. (I did that.) In any case, there can be no more rigorous peer review than the one that my investigation has been receiving in the weeks since publication. I am proud that not a single fault has been identified in the basic methodology, results, or conclusion. People have complained that I didn’t do a completely different experiment, but no one has doubted that 157 out of 304 fee-charging open access publishers failed a robust test of peer review / editorial quality control.
Dismissed it because this has nothing to do with open access; peer review itself is broken: This contortionist argument was first articulated by PLoS cofounder Michael Eisen in his blog post, “I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals.” He writes that peer review has failed before, even in prestigious journals like Science, so don’t point your finger at open access journals–and how dare Science of all journals do so! But I struggle to see how this makes the results any less troubling. He implies that we should expect that 60% of all journals, subscription-based included, would fail this peer review test. That could be the case, but wouldn’t that make these results even more disastrous and worrying for the scientific community?
Dismissed it because Science is biased: Is Science biased against the open access movement? I can’t answer this question. Until I dove into this investigation, I hardly paid any attention to the question of business models in academic publishing. Perhaps it would help if people knew that I actually started this investigation entirely on my own initiative. My previous editor was too nervous about being sued by publishers. But just months before the investigation was complete, a new editor took the helm of the news department at Science and was passionate about publishing it. We carefully went over everything with a lawyer, resulting in, for example, a huge amount of time spent redacting bank account numbers from the acceptance letters and invoices in the emails that were made public. So any bias that Science may or may not have against the open access movement had nothing to do with my motivation or the methods of my investigation, and it certainly had nothing to do with the results.
Dismissed it because the journals that accepted the fake paper don’t matter: You didn’t mention this one, but I’m adding it because it’s the most legitimate critique I’ve heard. And again, this was first articulated by Eisen in a discussion hosted by Peter Suber. I encourage everyone to read it from start to finish. You will not be disappointed. You get to see an accusation by Eisen, with even more cunning political cynicism than the DOAJ mud-slinging, that I am a partisan hack. You get a taste of the surprisingly nasty attacks that have come my way, e.g. from Björn Brembs. You get to see one of the publishers, Gunther Eysenbach, claim that he found “problems” with my data–but you have to read his blog to find out that those “problems” are actually extra submissions that had absolutely no effect on the results of the investigation and were excluded because they weren’t relevant. And most amazingly, you get to see David Solomon insinuate that my investigation violated codes of human subjects research. The discussion is an absolute circus. It may not show the academic publishing community at its most dignified, but it certainly shows them at their most entertaining.
It could very well be that the 60% acceptance rate across fee-charging open access publishers is less worrying than it seems. If the data were weighted by the number of articles published per year by each journal, for example, the rate might grow smaller. That is a completely legitimate question, and someone should do that analysis. Of course, that will require some work. (Eisen just states it as if it’s a self-evident truth–a claim he made during this video discussion.) But at this point, it is just an unsubstantiated hunch. Eisen believes (or hopes) it is true because “the big 3” (PLoS, BioMed Central, and Hindawi) rejected the fake paper. But he never mentions that Elsevier, Wolters Kluwer, and Sage accepted it. Don’t those publishers matter? Without doing the actual work of analyzing the data, should we dismiss these results on the basis of a hunch?
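The weighting question raised here can be made concrete with a short sketch. This is a hypothetical illustration only: the journal names and article counts below are invented, and the real analysis would require per-journal publication volumes from the actual 304-publisher dataset.

```python
# Sketch of the analysis proposed above: compare the raw acceptance
# rate with a rate weighted by each journal's yearly article output.
# All numbers here are invented for illustration.

def acceptance_rates(journals):
    """Return (unweighted, article-weighted) acceptance rates."""
    accepted = [j for j in journals if j["accepted"]]
    unweighted = len(accepted) / len(journals)
    total_articles = sum(j["articles_per_year"] for j in journals)
    weighted = sum(j["articles_per_year"] for j in accepted) / total_articles
    return unweighted, weighted

# Toy data: two low-volume journals that accepted the fake paper,
# one high-volume journal that rejected it.
sample = [
    {"accepted": True,  "articles_per_year": 20},
    {"accepted": True,  "articles_per_year": 30},
    {"accepted": False, "articles_per_year": 950},
]

unweighted, weighted = acceptance_rates(sample)
print(f"unweighted: {unweighted:.0%}, weighted by output: {weighted:.0%}")
# With this toy data the raw rate is 67%, but weighted by article
# volume it falls to 5% -- which is exactly the shape of result
# Eisen's hunch predicts. Whether the real data behave this way
# is the open empirical question.
```

The point of the sketch is only that the two metrics can diverge sharply; settling which picture is closer to reality requires doing the analysis on the actual data.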
Q: As a result of your investigation, the DOAJ has removed 114 OA journals from its list and claims that it will be “revising and tightening” its criteria for entry into the list. The Open Access Scholarly Publishers Association (OASPA) recently issued its own investigation, terminating the memberships of two publishers and putting one on probation. Do you believe either of these organizations is addressing the issue adequately?
I do not take the DOAJ seriously and have no expectation of professional behavior on their part (see “racist” accusation above). It could be that it’s just a few bad apples over there, but I’m still so shocked by their conduct.
As for OASPA, I’ve been impressed. They conducted an investigation (led by eLife executive director Mark Patterson), and just announced the results (11 November). Two publishers who accepted the fake paper–Dove Press and Hikari–have been kicked out of OASPA. As for SAGE, one of the most prestigious publishers to accept the paper, they only got a slap on the wrist: They will be listed as “under review” for 6 months. Perhaps this is a “too big to fail” moment for academic publishing.
From the very beginning of my investigation, I was in phone and email contact with the president of OASPA, Paul Peters (who is also the chief strategy officer of Hindawi). In the beginning, he was totally unaware of my sting operation. I just interviewed him, as myself (reporter, John Bohannon), about Hindawi. He was completely transparent and helpful, far more than any other open access publisher whom I interviewed. The fact that OASPA is led by Paul Peters, a professional to the core, lends them great credibility. Hopefully they will lead the effort to root out fraudsters from their industry.
Q: Many academic libraries have created open access funds for their authors. The Harvard HOPE Fund, which serves as the basis for many other library funds, stipulates that funding for article processing fees be limited to journals that are listed in the DOAJ and whose publishers are members of OASPA. Your study reveals that even these criteria are not sufficient. Are additional criteria for funding necessary?
Libraries now have one more criterion they can apply: If a journal or its publisher is marked as “accepted” on this page, then beware!
Q: In order to investigate the peer review and business practices of these journals, it was necessary for you to deceive the editors about your identity and intentions. Do you believe that such deception was warranted? As a journalist, do you work by a different ethical code than if you were working as a scientist?
Yes, we have a slightly different set of professional guidelines, though there is a great deal of ethical overlap with human subjects research. I submitted fake, fatally flawed scientific papers to research journals for the purpose of probing their claim of providing rigorous peer review. That falls squarely within the domain of investigative journalism.
This question has also been addressed by a professional bioethicist who studies this issue and he agrees that my sting operation violated neither the professional codes of journalism, nor of human subjects research. Here is how Harvard bioethicist Dan Wikler explains it in the discussion hosted by Peter Suber: “Superficially, the Common Rule might seem to apply to Bohannon’s study, but it’s equally true that it superficially might seem to apply to a vast range of journalistic (and also academic and even literary) investigations that have – rightly – never been considered morally suspect… I don’t think Bohannon’s investigation needs any serious defense either on human subjects grounds.”
Q: Is there anything else you learned from this study?
I learned that I have been too naive and idealistic about scientists. I assumed that the results [of my study] would speak for themselves. There would be disagreements about how best to interpret them, and what to do about them, but it would be a civil discussion and then a concerted, rational, community effort to address the problems that the results reveal. But that is far from what happened. Instead, it was 100% political and many scientists that I respected turned out to be the most cynical political operators of all. I got into the business of science journalism (after a PhD in molecular biology) because I had complete certainty in two things: that I love journalism (the investigation, following the facts, speaking truth to power), and that I love science. This experience has shaken my certainty a bit. On the bright side, I learned a huge amount of computer coding!
35 Thoughts on "Post Open Access Sting: An Interview With John Bohannon"
“If I wanted to know what proportion of zebras were infected by a disease, would I have to use horses as a control group?”
Actually, that would be useful. Because if horses have the same % of infection, the message changes from “zebras are in some kind of danger” to “What is up with horse-like animals in general that makes them prone to this?”
Especially if his journal was published by horses, and zebras were driving them to extinction.
Useful, but not necessary. If I do an experiment in zebrafish, must I also replicate the same experiment in goldfish?
You’re talking about two separate experimental questions. Both are interesting, but you can answer one without answering the other.
The ‘lack of a control’ criticism of this story to me is a bit like saying the Washington Post’s reporting of Watergate wasn’t informative because the reporters didn’t compare it to a non-US democracy.
Money corrupts. APCs called into existence predatory publishers, and now the corruption’s spreading up the chain. Why is anyone surprised? Financing OA through APCs is about the worst way of going about things. Unlamented though he is, at least Robert Maxwell blew up the then common model of page charges, taking money out of the author-editor-publisher nexus. Bringing it back seems a retrograde step.
Sorry, I just don’t buy most of his responses to critiques. It would be useful to know if these OA journals were targeting African scientists or not. I think that if he really wants to know whether these journals were doing their due diligence, he might want to examine whether there were caveats to their practices–like where the authors were from. That would distinguish whether OA journals simply don’t peer review like they say they do, or whether they are exploitative enterprises, targeting certain scientists to make a profit. And even if he isn’t personally trying to be racist with his choice of authors, he’s still playing into a long history of this type of bias. Sorry, straight, white, upper middle class men don’t get to be the arbiters here–they actually think science is a meritocracy.
The African researcher/institution part of the sting Bohannon devised seems distracting and unhelpful.
“I learned that I have been too naive and idealistic about scientists. I assumed that the results [of my study] would speak for themselves. There would be disagreements about how best to interpret them, and what to do about them, but it would be a civil discussion and then a concerted, rational, community effort to address the problems that the results reveal.”
Welcome to the real world of science!
As the editor of a journal, I must admit John Bohannon has increased my paranoia level a little bit. Even though we have real peer review and our rejection rate is comfortably over 50 percent, I’m wondering which submittal will be the next test. This, however, is a small price to pay to beat back the crooks who hurt unwary authors and contaminate the scientific record. Thank you, John Bohannon!
One can accept the validity of this investigation but interpret the results very differently. To my mind, the problem isn’t with the OA concept. Rather, it is with this particular method of financing OA publication, a method that is nothing less than an invitation to mischief.
The writer of a scientific article wants to believe that the journals to which they submit will make their work creditable. They have neither the time nor the inclination to examine publishers as rigorously as is reported here.
The fee-based model simply creates a large and vulnerable population of authors that unscrupulous publishers can feed off of. The organizations to which these predatory publishers belong depend upon some number of members and the revenue that they provide. None will question the fee-based model, which is common to all members of the organization. Filthy lucre makes this a race to the bottom. The surprising thing is that there are a few of these fee-based OA publishers that do pass muster.
“They have neither the time nor the inclination to examine publishers as rigorously as is reported here.” Sorry, I don’t buy this argument. An author who doesn’t know where to publish is in trouble from the get-go–it means he or she is not reading the scientific literature. Because if he did, he would know the journals important to his field and where to publish his own work.
This is the subtext that is too often ignored–too many authors, too few readers, and so content begins to degrade because few are actually paying attention to what gets published these days (not only in OA journals).
I think Matt is making a really important point. The “publish or perish” culture dominates, so incentives to publish have continued to mount. An assumption that “journal” = “quality” is so ingrained that anyone starting a journal gets some equity. What this sting showed is that many of the recent entrants touting the qualities of a “journal” are not up to the task. Bohannon’s interview shows that some of the loudest voices criticizing journals are not devoted to actually improving things, but rather to having their way.
This is a problem that is much larger even than “publish or perish.” It comes down to soft money at universities, educational priorities, and funding priorities. We’re just seeing the symptoms in the world of scientific and scholarly publishing. Unfortunately, some people think throwing unproven therapies at these symptoms will cure some underlying disease. They are wrong.
Your experience reminded me of my recent experience with PubMed Central, eLife, and the Freedom of Information Act. Despite OA publisher complaints starting me down the road that led to a whole host of revelations about how eLife, PMC, and others behaved, my findings were initially attacked as another sign that I’m “anti-OA.” In fact, it took OA voices like those you cite weeks to internalize that unfairness was the story, not OA. I had to write a post about this (http://scholarlykitchen.sspnet.org/2013/02/12/dont-shoot-the-messenger-keeping-our-eye-on-the-real-meaning-of-the-elife-pubmed-central-scandal/), reminding people not to shoot the messenger.
You’re a messenger of an important set of messages, the most important of which is perhaps that people weren’t really surprised by the findings. To me, this suggests that we’ve been quite complicit in the emergence of these bad actors and the incentives that drive them. We need to think about what that means. As I’ve argued before, we risk losing public support if we can’t keep our own house in order. Your work should serve as a wake up call. You should be congratulated for a job well done.
But don’t expect rational, calm, reasoned assessments from the likes of Eisen, Solomon, or others. They’ve demonstrated that they are ideologues who are quite willing to attack anyone they view as falling outside their particular view of OA orthodoxy. How they are able to continue to deny what is actually happening is beyond me.
The Bohannon Sting, the Anderson Exposé, and Beall’s List will all be historic investigations in the annals of open access development. In the end they will most likely prove to be helpful, not harmful. It has been tragic to witness the perceptions of the described antagonists who won’t help make open access become better. Bohannon described a circus; Anderson was beset by obfuscation; Beall gets threatened with lawsuits. The best bright spot seemed to be the new leadership of the OASPA. Let’s hope this leads to a new era.
I think that John Bohannon provided a significant service to the scientific community. The range of comments from the OA community suggest a level of ‘true belief’ that borders on unwavering certitude.
(Disclosure: I am seen as somewhat of an OA advocate)
I think the largest objection I heard was not so much to the investigation or the results, but to the editorial slant that came with the story. You could hear it in the reporting by second- and third-wave news outlets, which happily repeated the line that open access was shown to be flawed by the investigation. This is, to my mind, not entirely due to Bohannon’s article, as one could present the same results in a way that seeks to emphasise the limits of the investigation and clarify what exactly was discovered in light of what was already known.
For instance, one could say that a large number of small–and on the whole insignificant–fraudulent pay-to-publish publishers were caught out at their game. Many of these were already on the blacklist compiled by Beall. These publishers have been jumping on the bandwagon of open access as well-respected and successful open access publishers have been founded on the business model of asking scientists to pay the publishing costs of peer-reviewed articles rather than selling subscriptions. One could also point out that such pay-to-publish outfits are in the minority in the open access world, as most (and one can state some specific metric here, and make qualifications about size) open access journals do not ask for authors to pay for their articles to be published. On the whole, the result is roughly what was expected: small shonky outfits were exposed in action, and the large reputable publishers rightfully rejected the paper, with a few curious exceptions.
Of course, my own biases are obvious in the previous paragraph, but presented with a variation on the above, open access proponents would have been, in my estimation, perfectly happy that bad eggs were being weeded out. Just when Science starts trumpeting that a study has proved that open access journals are clearly corrupt money-grabbers, then it feels like a slap in the face to people who are working hard to make open access both respectable and successful.
PS I am curious to know how the combined volumes of all the publishers of the journals that accepted the paper compare to the volumes of the publishers that rejected the paper, and for fairness one should consider only the biomedical side of things (clearly Elsevier publishes a lot more than that!). One lesson which I feel I’ve learned from SK is that one should be clear about metrics, and look at them all (e.g. proportion of publishers vs a publisher’s share of journals vs a publisher’s share of articles, etc.). Saying hundreds of open access publishers are rotten to the core is like saying Springer is only one publisher among many…
I am currently the Editorial Manager of DOAJ (until Dec. 31) and I would like to comment on the DOAJ reaction to the Bohannon sting. It is my personal opinion I express here.
I think everyone advocating OA should welcome the opportunity to weed out bad journals/publishers. And based on the results and documentation presented by Bohannon, I recommended that DOAJ remove all the journals that accepted Bohannon’s “flawed article” for publication.
However, I do not back the responses from DOAJ; neither of them was presented to or discussed with the editorial team prior to publication. I can only speak for myself, but must say that I disagree with the content, the style and the tone of the responses. The reply from DOAJ should in my opinion have focused on the chance we got to clean and improve the content in DOAJ. I would also like to applaud Bohannon for the huge task he has taken upon himself in performing this sting operation.
It has been said over and over again that Bohannon has harmed OA with his article. I disagree: the ones who harm OA are, among others, the editors, peer reviewers and publishers who neglect to perform their task.
“Yes, we have a slightly different set of professional guidelines, though there is a great deal of ethical overlap with human subjects research. I submitted fake, fatally flawed scientific papers to research journals for the purpose of probing their claim of providing rigorous peer review. That falls squarely within the domain of investigative journalism.
This question has also been addressed by a professional bioethicist who studies this issue and he agrees that my sting operation violated neither the professional codes of journalism, nor of human subjects research. Here is how Harvard bioethicist Dan Wikler explains it in the discussion hosted by Peter Suber: “Superficially, the Common Rule might seem to apply to Bohannon’s study, but it’s equally true that it superficially might seem to apply to a vast range of journalistic (and also academic and even literary) investigations that have – rightly – never been considered morally suspect… I don’t think Bohannon’s investigation needs any serious defense either on human subjects grounds.””
I ask the question as a serious question because I thought this was a serious issue, not to take a cheap shot at your study. At least Dan Wikler took it seriously; you apparently don’t. Both of you missed the harm I was referring to. A lot of legitimate editors and reviewers actually reviewed the sham article, and some apparently took the trouble to provide feedback. That takes time and effort. If some of them thought it was a legitimate researcher in a developing country, they might have put considerable time into providing constructive feedback. Wasting their time without their permission is an ethical issue in my view. Whether the good of the study outweighs the time you caused them to waste is a legitimate question in research or journalism that involves deception impacting innocent people.
It is no doubt seen as heretical by some, but I don’t think there is a big problem here. Unless you believe, of course, that scientists should accept the contents of a ‘peer-reviewed journal’ at face value. Naïve scientists may, but any researcher worth his or her salt is always on the qui vive, ever remaining professionally critical and skeptical when reading an article, no matter in which journal it is published. There is bad science out there, you know. Whatever problem remains with bad articles is reduced even further if you include them in meta-analyses, which more and more needs to be done just to cope with the enormous number of papers being published while still maintaining a reasonably comprehensive overview of a given field. The bad article will just be an outlier in the data. Scientists know how to deal with outliers. They generally represent one of two things: rubbish or a potential breakthrough. Easily checked. It is highly unlikely that a bad article would be considered a breakthrough by the scientific community.
And what about the poor soul who submits his article to a ‘predatory’ journal and pays for the privilege of being published? Well, what about it? Tough. Buyer beware. The impact of such a scientist and such a paper is not likely to be more than that of a grain of sand on a concrete slab. The echo of a feather falling into the Grand Canyon, to paraphrase Don Marquis. Certainly no impact crater.
The whole affair is a cyclone in a thimble.
Bohannon has rightly appreciated OASPA’s decision to terminate the memberships of Dove Press and Hikari as a result of his investigation. I also support DOAJ’s removal of 114 OA journals from its list as a result of that investigation. Very good. Weeding is always necessary. OASPA and DOAJ are taking action to correct their lists. Nobody is perfect, and revision is always necessary once a kind of ‘peer-review report’ becomes available. But does anybody know whether J. Beall has taken any action to correct his famous list? The sting operation and all the related discussion on the internet tend to highlight who failed in this experiment. They say nothing about those publishers who passed the experiment but still occupy a seat on Beall’s famous list. I really hate this trend.
Similarly, the results show that neither the Directory of Open Access Journals (DOAJ) nor Beall’s List is accurate in detecting which journals are likely to provide peer review. Bohannon reports that Beall was good at spotting publishers with poor quality control (82% of the publishers on his list accepted the manuscript). That also means Beall is falsely accusing nearly one in five of being a “potential, possible, or probable predatory scholarly open access publisher” on appearances alone. (Reference: Phil Davis: http://scholarlykitchen.sspnet.org/2013/10/04/open-access-sting-reveals-deception-missed-opportunities/)
Punishment and encouragement are equally important for the development of this new OA publishing industry.
I have the following questions:
1. Has DOAJ any plan to encourage the small publishers who rejected Bohannon’s article after rigorous peer review?
2. Has OASPA any plan to encourage the small publishers who rejected Bohannon’s article after rigorous peer review?
3. Has J Beall any plan to encourage the small publishers who rejected Bohannon’s article after rigorous peer review?
Maybe people are afraid to touch the so-called ‘untouchable predatory publishers’ who are sincerely trying to become good publishers and who successfully passed the ‘Bohannon test’. We are so happy and relaxed that at least the big three (PLOS, BMC, and Hindawi) passed the test. So open access is saved. After all, we are afraid for our social reputation and cannot risk it by touching the ‘untouchables’.
We must find a way to improve the status of the so-called ‘predatory publishers’. We must identify and relabel the small publishers who are sincerely trying to improve their practice by learning from the errors of the past. Working with elites is easy. But I believe OA is lacking a Mother Teresa (http://en.wikipedia.org/wiki/Mother_Teresa) who is ready to work with the ‘untouchables’ to clean their world with love, passion, and patience. (I believe at least some of the so-called predatory publishers now deserve better treatment. If we can spread ‘the story of their improvement’, maybe it can be an effective medicine to heal the remaining disease.)
I think that the following statement by Akbar Khan is very unfair: “That means that Beall is falsely accusing nearly one in five as being a ‘potential, possible, or probable predatory scholarly open access publisher’ on appearances alone.” Just because some of the journals on Beall’s list rejected the Bohannon article doesn’t preclude them from being “potential, possible, or probable predatory scholarly open access publishers.”
The standards being used here seem both unfair and poorly defined. If acceptance of the fraudulent paper is sufficient grounds for terminating the OASPA memberships of two publishers and removing 114 OA journals from DOAJ, then shouldn’t the same fate await any journal that makes a mistake in peer reviewing a fraudulent or incorrect paper? Should any journal that has had to retract a single paper no longer be trusted?
And what of those journals that have been labeled “predatory” yet rejected this paper? If one mistake is enough to damn a journal, shouldn’t one example of proper behavior be enough to clear a journal’s name? If the point of these registries and blacklists is to improve OA culture and reduce predatory journals, shouldn’t there be a route provided for those who wish to improve their policies and clear their name? Or is being labeled “predatory” a permanent label, from which there is no rescue, and hence no motivation for trying to improve behavior and become a good publisher?