
Like (I suspect) many readers of the Scholarly Kitchen, I am regularly invited to submit papers to conferences on scholarly topics that have nothing to do with my areas of expertise or interest: materials science, chemical engineering, organizational behavior, climate change — a wide variety of conference programs are apparently hungry for my insights on every topic imaginable, despite the fact that anything I could write about most of them would amount to little more than gibberish.

As it turns out, there’s an explanation for this: gibberish is, at some conferences anyway, a marketable commodity.

A recent article in Nature reports that over the past couple of years, computer scientist Cyril Labbé has uncovered more than 120 papers, published in more than 30 conference proceedings between 2008 and 2013, that were not written by scientists or scholars but were instead generated by SCIgen, a computer program created in 2005 by researchers at the Massachusetts Institute of Technology. Using a context-free grammar, SCIgen generates papers consisting entirely of nonsense. Because the nonsense is syntactically coherent and draws on discipline-specific terminology, it can easily be mistaken for scholarship by nonspecialist readers — particularly if those readers are not paying close attention.
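
For readers curious about the mechanics: a context-free grammar is just a set of rewrite rules that are expanded recursively until only words remain, which is why the output parses as English while meaning nothing. Below is a minimal sketch of the technique in Python; every rule and vocabulary item is invented here for illustration, and SCIgen's real grammar is vastly larger and tuned to computer-science jargon.

```python
import random

# Toy context-free grammar in the spirit of SCIgen (all rules invented).
# Keys are nonterminals; anything not in the dictionary is emitted verbatim.
GRAMMAR = {
    "SENTENCE": [
        ["We", "V_REPORT", "that", "NP", "can", "V_ACT", "NP", "."],
        ["In", "this", "work", "we", "V_ACT", "NP", "using", "NP", "."],
    ],
    "NP": [
        ["ADJ", "NOUN"],
        ["the", "NOUN", "of", "NP"],   # recursive rule: arbitrarily nested noun phrases
    ],
    "ADJ": [["pervasive"], ["stochastic"], ["metamorphic"], ["wearable"]],
    "NOUN": [["epistemologies"], ["web services"], ["checksums"], ["red-black trees"]],
    "V_REPORT": [["demonstrate"], ["argue"], ["confirm"]],
    "V_ACT": [["visualize"], ["harness"], ["emulate"], ["refute"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:          # terminal: emit as-is
        return symbol
    return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

if __name__ == "__main__":
    for _ in range(3):
        # Tidy spacing before the period for readability.
        print(expand("SENTENCE").replace(" .", "."))
```

A typical run yields lines like "We confirm that wearable checksums can refute the epistemologies of stochastic web services" — syntactically plausible, terminologically on-brand, and entirely empty of meaning.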

“Paying close attention,” you might think, is what peer review and editorial oversight are all about. Peer review doesn’t necessarily work the same way for conference proceedings as it does for scholarly journals, but the two publishers responsible for the proceedings in which Labbé found fake papers (Springer and the Institute of Electrical and Electronics Engineers, or IEEE) are nevertheless left with lots of egg on their faces. It’s not as if a double-blind review process would have been needed to catch these fakes — as Labbé points out, “the papers are quite easy to spot,” and any competent proceedings editor with a modicum of expertise in the field should have been able to detect them.
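
Why are they easy to spot? Because everything SCIgen produces is composed from a fixed grammar, its output reuses a small, characteristic vocabulary, so even a crude automated screen can flag candidates for human review. The sketch below illustrates only that general idea, not Labbé's actual method (which this post does not describe); the function names, the scoring rule, and the corpus mentioned in the comments are all invented.

```python
import re
from collections import Counter

def word_counts(text):
    """Lowercased word frequencies, ignoring punctuation."""
    return Counter(re.findall(r"[a-z'-]+", text.lower()))

def overlap_score(candidate, reference):
    """Fraction of the candidate's word occurrences that also appear in the
    reference vocabulary. Crude, but text built from a fixed grammar scores
    suspiciously high against a corpus of known generated output."""
    cand, ref = word_counts(candidate), word_counts(reference)
    total = sum(cand.values())
    return sum(n for w, n in cand.items() if w in ref) / total if total else 0.0

# Hypothetical usage: scigen_corpus would be a pile of known SCIgen output,
# e.g. a few hundred freshly generated papers concatenated together (not
# included here). A submission scoring near 1.0 against that corpus deserves
# a human look before it goes anywhere near a proceedings volume.
```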

More troubling than the discovery of these published deceptions is the fact that it’s not the first time IEEE has been responsible for this kind of thing. In 2012 Labbé notified IEEE of 85 fake papers in their proceedings publications, and it quickly withdrew them — but in 2013 he identified a new batch of bogus papers from the same publisher. This suggests not only that IEEE’s review and oversight processes were well below par in 2012, but that Labbé’s exposure and IEEE’s acknowledgment of the issue did little to strengthen those processes subsequently.

An additional (and even more disturbing) problem with the proceedings papers most recently discovered is emerging as the investigation continues: at least one of the authors contacted had no idea that he had been named as a coauthor. This suggests that the submissions were more than spoofs — spoofing can easily be accomplished by using fake names as well as fake content. The use of real scientists’ names suggests that at least some of these papers represent intentional scholarly fraud, probably with the intention of adding bulk to scholars’ résumés.

This development harks back to a couple of other notable sting operations intended to expose shoddy or fraudulent practices in particular disciplines or areas of the publishing community. It comes only a few months after science journalist John Bohannon published the results of his investigation into the editorial practices of 304 Open Access (OA) publishers. He submitted a deliberately flawed fake paper to all of them, and it was accepted for publication by just over half. The reaction to his investigation was overwhelmingly negative and defensive — and came not so much from the publishers themselves as from the OA advocacy community, which tended to see his study as an attack on OA generally. For a discussion of the study and the reactions to it, see Phil Davis’s interview with Bohannon, published in the Scholarly Kitchen last fall. (As an interesting side note, Davis and Kent Anderson collaborated on a similar sting operation in 2009 after receiving multiple spam invitations from Bentham Science Publishers; incidentally, they used SCIgen to create the nonsense paper that they offered as bait. Bentham accepted the paper after claiming to have subjected it to peer review.)

Labbé’s exposé also brings to mind a similar and more famous hoax perpetrated in 1996 by physicist Alan Sokal, who wanted to find out whether a leading journal in the field of cultural studies would “publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions.” As it turned out, the journal Social Text was indeed willing to publish Sokal’s piece, despite the fact that it contained such absurdist assertions as “Lacan’s psychoanalytic speculations have been confirmed by recent work in quantum field theory” and “the axiom of equality in mathematical set theory… reflects set theory’s ‘nineteenth-century liberal origins’” (and despite the fact that it explicitly questioned the reality of physical existence). Sokal’s hoax also generated defensive responses and lots of debate, but (in my view) not nearly enough soul-searching on the part of the academic community.

Springer’s response to Labbé’s exposé was just what it should have been: not defensiveness, accusations of bad faith, or attempts to shame and silence him, but embarrassment, corrections, and promises to do better. IEEE’s response (sent to me by Monika Stickel, Director of Corporate Communications, but not available online) is more vague and less apologetic, allowing that “there might have been some conference papers… that did not meet our quality standards” but also saying that IEEE “took immediate action to remove those papers, and also refined our processes to prevent papers not meeting our standards from being published in the future.” According to Nature, those articles are now gone from the IEEE Xplore database — but without any notification or explanation to readers as to what happened to them.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Discussion

30 Thoughts on “The Scam, the Sting, and the Reaction: Labbé, Bohannon, Sokal”

Thanks for posting that, Richard. It is indeed the same statement they sent me, though what I got didn’t include that 19th February addendum.

It’s clear to me that the label “peer review” needs some serious nuance to it, which is part of the reason I’m helping to bring PRE-score to market. “Peer review” is not a single approach across journals, and it’s not uniform within publishers or even within journals. Supplements, conference proceedings, and even different article types get different levels of peer review. As consumers of the literature, we could use some granularity here.

What seems to be feeding all of this is the pervasive notion of “convenient publication.” Authors expect it, and editors and reviewers have internalized it to some extent. We certainly need to cater to authors, but in more of a personal-trainer way. Catering to their expectation of convenient publication lands us in spots like this. It’s not healthy.

We also only have so many people with the time and ability to conduct stings. Conducting stings is not a systematic way of addressing the problem. It’s more like population surveillance. The extent of the problems we find when we sample the population seems to indicate a more pervasive root cause.

But I think stings are helpful. It’s like when police put an empty cop car on the side of the road. Everyone slows down.

I agree, they are helpful, but they aren’t a full answer. And I’ve been talking to a lot of editors recently, many of whom had never heard of the Bohannon sting, not to mention Sokal. The empty cop car doesn’t work when the people driving by think they’re the cops.

How widespread is the practice of publishing conference papers that are not presented at the conference (as I assume these were not)? That may be the real problem.

Oh, very often. I know a lot of conferences that have a special price for those who only wish to publish in the proceedings without even planning to attend the conference. Doesn’t make much sense (well, any sense) to me!

It makes a lot of sense in fields where the primary publications are conference proceedings (e.g. computer science). It allows authors to publish for a fee and also gives the conference another revenue stream.

It makes sense, then, that computer conferences might be targeted by a computer-based scam.

Exactly, David. Haven’t we just read above about IEEE, which is KNOWN to be highly regarded?

I’ve chaired conferences before. Let me assure you that producing a proceedings is vastly different from editing a peer-reviewed journal.

1. Time. The schedule for a conference is set in stone. At best, you have time for one round of reviews and revisions. In our journal, we can keep going until we get it right.

2. Reviewer qualifications. Proceedings typically are reviewed by one or two members of the conference technical committee, who may or may not have all the requisite skills. Journals choose reviewers from a large pool.

3. Responsiveness of authors. Without a second round of reviews, what the author sends back usually is what is published, even if the author blows off the comments. Don’t try this with a journal.

4. Direct feedback. The conference session often generates comments from the audience. Unfortunately, proceedings typically have no way of capturing this interaction. Journals have Discussion and Reply.

5. Consequences of rejection. Rejecting a proceedings paper often means having at least one less attendee and an open slot in the program. Rejecting a journal paper typically opens space for another.

With all these limitations, it’s hard to justify calling the proceedings of most conferences peer reviewed in the same sense as a journal is peer reviewed. Our organization now requires only abstracts from presenters. The conference committee typically works with me to select the most promising presentations and encourage full journal submittals following the conference.

Conferences also carry the baggage of visa shenanigans. I don’t know about China’s policies, but for a US conference it’s not unusual for individuals to register and immediately request a letter to get a visa. Too often, the credit card is fraudulent and the individual has no academic record. A computer-generated paper would seem a natural tactic for such individuals.

There’s no question that producing a proceedings volume is substantially different from producing a peer-reviewed journal, but none of this absolves the proceedings editors from their failure to detect the papers in question. Let’s not lose sight of what we’re talking about here: not papers that were badly written, or that reflected poorly executed research, or that demonstrated fatal flaws of bias. These papers were, literally, nonsense — they contained no meaningful intellectual content. The fact that they were sold to the public by well-respected publishers doesn’t reflect an unfortunate but natural consequence of the difference between journals and proceedings — it reflects editorial failure at the most fundamental level.

One paper mentioned in the story is this one: ‘TIC: a methodology for the construction of e-commerce’, front page captured here:
http://www.deepdyve.com/lp/institute-of-electrical-and-electronics-engineers/tic-a-methodology-for-the-construction-of-e-commerce-X4RSbStfJN

The paper is still listed in the table-of-contents PDF for the 2013 QR2MSE conference (top of p. 33):
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6625522

The proceedings for this conference run to around 2,170 pages and around 500 papers!

While the Labbé nonsense papers appeared primarily in conference proceedings, as Kent mentions, the problem you address relates to peer review, which is where PRE-score hopes to make a positive impact. One point: I believe that at the time of the Sokal hoax, the journal Social Text was not peer reviewed. If it had been, who knows what might have happened.

As for Ken’s point about the differences between peer review of proceedings and a journal, my experience is the same and I agree. However, there is still no excuse for allowing blatant “nonsense” papers through. It seems to indicate no one looked at the papers at all.

I agree. It’s easy for me to see why some “presenters” would submit these papers. Much harder to understand how they got through any legitimate vetting system.

The TIC paper mentioned in my comment above was one of around 500 papers in that proceedings volume. According to the QR2MSE call-for-papers site, the organizers had just over a month to review them all once the call closed. So I’m guessing it wasn’t even remotely close to a proper vetting process.

The Nature article says Labbé identified more than 120 papers in more than 30 conference proceedings. How is that meaningful without knowing the total number of papers contained in those proceedings? If we’re talking about many thousands of individual conference papers, is this really the ‘huge scandal’ in academic publishing it’s being made out to be? In some ways, it’s amazing that there were *only* 120 fake papers, and that Labbé was able to find them.

Even if there were 12,000 papers in the 30 proceedings, that would be one percent fakes, which is a huge number. But it sounds more like these specific conferences were deliberate targets. I am not convinced that there is a real problem here as far as scholarly publishing is concerned, other than a few conferences being targeted.

In many journals, proceedings do not undergo the same level of peer review. I think the proceedings are created electronically and often just get read into the journal, more as filler than anything else. The production process does not really pay much attention to this literature. Perhaps the proceedings editors should pay more attention. Not a big deal.

Call me a Boy Scout, I guess, but it seems like a big deal to me. If you can’t count on a publisher to filter pure and literal gibberish out of its proceedings publications, then I’m not at all sure what legitimate purpose the publisher is serving.

The problem with allowing these second- or third-tier materials to exist under the brand of a legitimate journal is that they are then ripe for misappropriation. We went through a period of regular misrepresentation of “findings” from scientific meetings, when novice science journalists found supposed stories in these proceedings books and weren’t savvy enough to realize how preliminary such abstracts are. It’s still a sort of filter failure, and with the Internet making distribution boundaries an unreliable filter, we need stronger filters before we distribute.

The bogus papers for conferences in China could be a tax scam. Find a conference vaguely related to your field, submit a “paper,” get your name in the program, and, voilà!, your boondoggle becomes a tax-deductible business trip. At least it is until the IRS gets wise to the scam. I hope they’re reading. More likely, it will just mean another hoop for legitimate conference organizers to jump through.

Interesting thoughts. Whose responsibility will it be to prove otherwise when the IRS decides a legitimate conference in, say, China is a scam?

It seems to me that the important questions are who is submitting these bogus papers, why, and how (if at all) they should be punished. It is not a sting, because the submitters never reported it. It is not scientific fraud, just submission fraud. Perhaps it is an attack of some sort, or maybe just a game. The other question is how widespread the game is.

David, you’re right that the Labbé study (unlike the Bohannon, Sokal, and Davis/Anderson operations) wasn’t really a “sting.” But who submitted the papers and how they should be punished are actually not the most important questions here, interesting and important though they might be. The most important question is how literal nonsense (not just poor-quality science) made it all the way through the editorial processes of two major and highly regarded science publishers and was then presented to the world by those publishers as scholarship. The buck stops with the company whose logo is on the product (and on the invoice). Discriminating between high-quality and low-quality scholarship is exactly what subscribers pay publishers like Springer and IEEE to do. If their editorial processes can’t even discriminate between coherent writing and literal nonsense, then Springer and IEEE have very serious problems that need to be addressed if we are going to continue trusting them as purveyors of scientific communication — and those problems exist independently of questions about how many scam artists there are in the world and what should be done about them.

Rick, I was not referring to the Labbé study when I said there was no sting, but rather to the submission of the bogus papers. I want to know why this is happening and who is doing it. It is a lot of bogus papers. For example, is it a social movement, a contagious prank, or something more sinister? Are we going to be swamped with bogus submissions? How do we prevent that?

Note in that regard the pingback below that alleges that “fake papers are flooding academia.” How true is this “flooding” claim? If it is false, which I suspect, then we really have a serious rumor problem.

I am less concerned about the bogus papers being published, because in the conferences where I have presented there was no peer review. My appearance was based on a proposal, and I did not expect anyone to read my paper before it was published, much less possibly reject it. But then I come from a publishing realm that does not use peer review, namely federal research reports. There the view is that if we paid for it, we will publish it. In many conferences the parallel rule seems to be that if you present it, we will publish it. That is why I first asked whether these bogus papers were presented.

So I guess you and I just have different interests. You are alarmed that the bogus papers were published but I am alarmed that they were submitted.

“Discriminating between high-quality and low-quality scholarship is exactly what subscribers pay publishers like Springer and IEEE to do.” In my limited experience with Springer-published journals over a ten-year period, I have never had any indication from Springer that they were interested in the content of journal papers (or of conference proceedings published in a journal). Springer was interested in the version of Adobe Distiller used to produce PDF files, how many dots per inch in illustrations, etc. I know that Springer does publish some journals and books of its own, and I can presume that Springer exercises some editorial control in those cases. My experience has only been with Springer acting as a printing house (and claiming to be the publisher). I have no experience with IEEE.
