The Funky Phantom (Photo credit: Wikipedia)

Halloween wouldn’t be complete without a ghost story, and while we’ve covered ghost writers in the past, a bizarre paper retraction takes things to a new level.

Bruce Spiegelman, from Harvard’s Dana Farber Cancer Institute, had presented his lab’s findings on two novel proteins involved in diabetes and obesity in mice at a variety of scientific meetings. Much to his surprise, those same results were published in a paper in Biochemical and Biophysical Research Communications (BBRC). Even stranger, the paper’s authors don’t seem to exist.

Declan Butler covers the details of what happened in a Nature News article, and Adam Marcus wrote it up for Retraction Watch as well. The listed authors appear to be made-up names, apparently conglomerations of the names of researchers in the field. BBRC is billed as a “rapid communications journal,” so a peer review system that emphasizes speed likely contributed to this paper getting through undetected.

The motivation for doing something like this is baffling. Someone took the time to write up a paper and create figures, presumably based on what was presented at the meetings. Who has the time, and more importantly, why?

Spiegelman notes that he had already filed patent applications before the data was presented publicly, so it’s hard to see any financial motivation; and with the authors being fictitious, there’s clearly no attempt by the fraudsters to gain academic credit for themselves.

Perhaps this is the work of some crazed advocate, trying to make a seemingly obscure point about peer review (a protest against double-blinding in favor of single blinding?); a pro-ORCID statement (though as pointed out, ORCID would not have helped here as fake ORCID accounts could have been created); maybe an attempt to drive usage of preprint services (though here, there’s no question in the community who the data belongs to). Suggestions are welcome in the comments below.

Spiegelman is left to speculate that this is a malicious personal attack against him and his laboratory, and perhaps he’s right. Priority, being the first to discover something new, remains a cornerstone of academic career credit. Even after detection and retraction, the fraudulent paper may hinder Spiegelman and his students from publishing the true version of their results in the journal of their choice. The cat is, in some ways, out of the bag. One hopes that journal editors will be sympathetic in this case, and not penalize Spiegelman’s group for something that was not their own doing.

It remains worrisome though–over the years, researchers have become more and more hesitant to discuss unpublished data at public meetings. The presence of bloggers and Twitter users has turned meetings into a broadcast medium, where anything said is instantly spread worldwide. This has been great for the rapid dissemination of knowledge, but at the same time, it has made meetings more dull, full of presentations of results you’ve already seen. Incidents like this certainly aren’t going to help.

At the very least, as we all work to streamline our peer review processes to better meet author needs for rapid publication, this ghost story should serve as a warning that some corners should not be cut. Verification of author identity may be a step worth adding, but even so, it’s hard to fault the journal here. The very nature of scholarly publishing is based on good faith. Journals assume authors are who they say they are and that deliberate attempts to defraud are rare.

I don’t think we’ve reached a point where we need to revamp our approach, to start with a notion of guilty until proven innocent, and really, the thought of doing so is repulsive. But as competition for dwindling research dollars and jobs intensifies and anonymous online communication continues to erode personal responsibility, that may become an archaic notion. This very strange and ultimately pointless act is an isolated incident, but is perhaps indicative of a larger shift in society. Still, I’d like to keep some faith in the purity of academia, and share Rachel Toor’s hope that we can do better:

Something in me wants people with advanced educations to be better than the lugheads who write the barely intelligible nasty anonymous comments on other sites. I dream, with an innocence I cling to, that academics can be better than the teens who bully each other into depression and suicide. I want our students to see us as examples—not only of how to write and how to argue, but of how to behave.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

28 Thoughts on "A Scary Story for Halloween: The Curious Case of the Phantom Authors"

It worries me you said “Verification of author identity may be a step worth adding”. [emphasis mine]

So checking from external sources (before publishing the paper, at the latest) whether author-provided identity and affiliation data is correct is not an established practice in any and all reputable scientific journals? Of course such checks (like simply searching the staff list on the institutional website, or googling the author-provided email address) are not immune to a really determined fraudster willing to create fake websites etc., but it would be astonishing if even a basic check were not an industry-wide standard practice?

Many people place trust in the editorial practices of respected journals. Is that trust unwarranted?

Imagine, for example, someone writing a paper (in itself not fraudulent) but fraudulently listing an institutional affiliation outside the author’s home country, and then competing for a job, citing that publication as independent evidence of actually having been employed at some international institution.

Honestly, in my experience, a deep background check on authors is not usually part of the editorial process. Any current editors want to weigh in?

Most authors are publishing via an email address from their listed institution, which is likely a sign of affiliation. In this case, the generic email addresses should perhaps have been a tip-off that further investigation was warranted. But still, it gets to the question of whether journals should see themselves in a supportive role or an antagonistic role to authors.
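
A rough, purely illustrative sketch of the kind of lightweight check being described here might look like the following: flag for manual follow-up any submission whose corresponding email domain is a generic provider or does not match the listed institution. The domain lists and author records are hypothetical, and this is not a description of any journal’s actual workflow.

```python
# Hypothetical sanity check: flag author emails that don't obviously match
# the listed institutional affiliation. Illustrative only.
GENERIC_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def needs_followup(email: str, institution_domains: set[str]) -> bool:
    """Return True if the address does not obviously match the affiliation."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in GENERIC_DOMAINS:
        return True
    return not any(domain == d or domain.endswith("." + d) for d in institution_domains)

# Hypothetical author records, for illustration only.
print(needs_followup("jsmith@dfci.harvard.edu", {"harvard.edu"}))     # False
print(needs_followup("realscientist123@gmail.com", {"harvard.edu"}))  # True
```

A check like this would only surface candidates for a human to look at; it obviously cannot prove identity on its own.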

I have to agree with this, based on many years of interactions with editorial offices. There’s a lot of trust given to authors. This can lead to authors misrepresenting co-authors, authors fabricating data, and authors dropping potential co-authors.

This story is a new twist. Usually, it’s clear that someone is pursuing outsized academic credit by abusing the trust given to authors. This isn’t about any individual obviously benefiting, as the culprits are anonymous so far. Rather, it’s most likely some sort of academic sabotage — seeking to drain novelty away from a competitor’s claims by publishing them early under an assumed name.

Publish or perish doesn’t apply here. It’s more like theft via publication.

35 journals, over 9000 submissions, 5 editorial coordinators. Nope, we don’t google each and every author to determine that they are a real person. We get hammered every day that it takes a ridiculously long time to get through peer review. I can only speed up staff time and try to convince editors to speed up review time. I also get hammered on the fact that we “ask too many annoying questions” of authors when submitting a paper. Questions like– have all authors seen this paper, has this paper been published or presented elsewhere, provide an email for each co-author, do you have permission to use all data, figures, and tables? These are too “time consuming” for authors and apparently “no one else” asks these questions. Heck, we still require a signed authorship and copyright transfer form and when authors break the rules on that they literally laugh in my face when I tell them they signed a form attesting to certain facts. So no Janne, journals don’t do a background check to make sure authors aren’t lying to us. We assume they are telling the truth and when we get burned, we take the hit in Retraction Watch or some other place. I count on my coordinators to stay alert and notice weird anomalies that might point to a problem but when peer review is expected to take no more than 30-60 days, something is going to get overlooked.

So it is caveat emptor for subscribers? It would seem prudent to protect readers (and, from a business perspective, the journal brand) by doing a final sanity check before releasing an article to the world, at least if the authors have given a generic email address instead of an institutional one.

It gets worse, too… Some journals use reviewers suggested by authors without checking that they are real people. There were recent cases in some journals where authors were able to use a fake identity (= simply a Gmail address) and then be peer reviewers on their own articles.

One could of course claim in those cases too that there are not enough editors or editorial staff, and too rushed a schedule, to carry out a more thorough procedure, and that journals must be able to trust authors when they suggest reviewers, and must be able to make decisions based on reviewer comments without checking whether they make sense. But that argument might then raise the question of why editors are needed at all.

I think it’s a matter of how you define “sanity check” and concentrating your efforts in the most effective manner. Is faking an identity really a common practice? Is that the best use of editorial time (keeping in mind that a great number of journal editors are working researchers who are volunteering their time)?

If this becomes a common problem, then yes, journal offices will indeed make it a part of their standard practice. But for now, this incident stands far out of the ordinary. Though I do agree with you that I’d likely double check anyone using an email address not directly associated with their listed institutional affiliation.

A lot of reviewers, authors and editors use generic email accounts. We are now requiring two emails and at least one needs to be at an institution. The hope is that through Peer Review, a reviewer might think it’s weird that he/she has never heard of these people. The reviewers often Google authors to see what other stuff they have published. That said, a reviewer has never asked my office to try and verify that a person is real. Our reviewers also find duplicate submissions or significant overlap with papers that is NEVER detected with CrossCheck.

I am glad that you see value to what staff at journals could and should do to protect the reader from this kind of fraud. Many seem to think that peer review is cheap or easy and that publishers should not be compensated for their expenses. I think a lot of society journals that are being managed with limited resources are reactive because we don’t have the luxury of being proactive. We are just doing the best we can.

Generic email accounts are quite important for contract researchers. I currently have four institutional email accounts – a university student account, two for contract research I’m currently doing, and a (temporary) university affiliation. The three non-student accounts will only exist until the end of my contracts. They are as precarious as the grant funding that pays my way. In this context, a generic email account is one way to have a stable email address.

The student account is an ’email for life’ deal administered by Microsoft, but only proves that at some point I was enrolled at the university.

The very nature of publishing would suggest that a journal’s primary concern should be readers, even (or especially) when that is at odds with having a supportive role to authors. So while journals can and should be in a supportive role to authors, they can’t be where that conflicts with their supportive role to readers.

I presume the real author wanted to test if this journal would publish a plagiarized paper, for whatever reason.

‘Kasper’ posted this theory over at Retraction Watch
http://retractionwatch.wordpress.com/2013/09/30/journal-withdraws-diabetes-paper-written-by-apparently-bogus-authors/

“By publishing Spiegelman’s data before him, you effectively prevent him from publishing the same dataset. He would have to produce additional data, which could take years, or submit a low-impact version of what he has as support for the first article, at the risk of being charged with plagiarism himself if the data added up too well. This could hurt him professionally, making it harder for him to achieve funding, new assignments, tenure, or other jobs in the field.
It’s a nasty but potentially efficient way of eliminating competition in the field, or just getting petty revenge.”

Perhaps we will eventually have to obtain and use a personal or professional digital certificate. Digital certificates are what ‘secure’ web sites use to assure visitors that the web site is what it claims to be; a bank or a publisher’s submission site, for example. That same certificate also enables the use of secure sockets layer (SSL), which assures the recipient that nothing was intercepted or altered in transit. Not the same as old-fashioned interpersonal trust, but it’s less prone to mischief.
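
For readers unfamiliar with how certificates work in practice, here is a minimal sketch (my own illustration, not part of the commenter’s proposal) of the web-server version of the idea: a client validating a site’s certificate against trusted certificate authorities during the TLS handshake, using Python’s standard library. The hostname is a placeholder.

```python
import socket
import ssl

hostname = "example.org"  # placeholder site
context = ssl.create_default_context()  # loads the platform's trusted CA roots

with socket.create_connection((hostname, 443)) as sock:
    # The handshake raises ssl.SSLCertVerificationError if the certificate
    # chain or the hostname does not check out.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Subject:", cert["subject"])
        print("Issuer:", cert["issuer"])
        print("Valid until:", cert["notAfter"])
```

A personal certificate for authors would work on the same principle: a trusted third party vouches cryptographically that the key holder is who they claim to be.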

This problem could be immediately solved if journals charged a small submission fee, at least to the submitting author and possibly to all authors named on the manuscript. The credit card process ensures a high level of individual identification fidelity. (It also discourages spurious submissions and generates new incremental revenue.) Richard.

This is an interesting idea but it assumes that all authors have credit cards (therefore limiting authorship, perhaps severely, to card holders), and that authors pay for publication expenses out of pocket. As a grad student, I paid a small submission fee off a departmental credit card and the page charges from a grant number managed by our finance dept.

Would requiring that all authors first register and be verified by ORCID be an alternative and acceptable proposal?

Good point–each university seems to have its own purchasing system–at times it becomes very difficult for an individual student to buy anything directly themselves. You do also start limiting submission of articles to those who can afford submission fees, setting a “pay to play” tone that may be problematic.

I’m not sure ORCID is the answer, at least until ORCID has some system for verification. As far as I can tell, no such system is in place.

Payment by the author’s employer (typically a university) is an even better validation of their identity because the university won’t disburse funds without appropriate paperwork from the requester using internal systems. Universities won’t disburse funds for faculty that don’t exist or who disclaim knowledge of the manuscript!

Having the university validate the ORCID is certainly an alternative way to solve the problem, but it creates a massive new administrative burden and requires the creation of new processes that don’t exist. In contrast, payment processing is an established methodology that can be easily leveraged to greatly increase the fidelity of author identity validation via the mechanism of a small submission fee. Organizations like CCC and EBSCO are developing back-office solutions that will further facilitate this workflow, where payments are requested by faculty but paid by the university.

As previously noted, submission fees also offer major advantages over APCs (Article Processing Charges) because they eliminate the perception of bias in favor of accepting an article for publication; and they provide fair compensation in cases where the journal manages the review process for a manuscript that is rejected.

Richard.

I think submission fees also offer a pathway to doing high quality, highly selective OA journals. The problem now is that the accepted articles have to pay for the rejected articles, which applies economic pressure toward accepting as many articles as possible. Submission fees would help because they’d 1) drastically reduce the number of wildly inappropriate submissions that still must be processed and 2) help pay the cost of rejecting articles.

The real stumbling point on implementing them is that it puts a journal at a competitive disadvantage when other journals are not charging submission fees. Should I send my article to journal X or journal Y? Y charges $100 for submission, and given how tight my funding is, I’m likely going to choose X, all other things being equal.

But if an OA journal charges only submission fees, i.e. no publication charges at all, things change. The journal now has a financial incentive to maintain as high a submission rate and rejection rate as possible. Highly selective ~ highly prestigious platform for a scientist.

A $100 shot at a highly prestigious journal, versus a $1000+ APC at a non-selective journal? I would go for the first option with my papers, even if I had no funding.

But there must be some catch, otherwise the top-tier journals should have taken this avenue already?

I suspect the “catch” is that it’s not possible to sustainably run a journal on a revenue stream that low. You’re still competing against journals that charge nothing for submission so the number of papers sent in will drastically drop off. But if we ignore that, and figure your journal gets 1000 submissions per year, a $100 charge means total revenue of $100,000. Or if you accept 30% of those submissions and charge $1000 per published paper, your revenue stream is now $300,000.

And those numbers are for a journal in a large enough field to allow that many submissions. I work with some niche journals that see closer to 200 submissions per year. Look at it on that scale: 200 subs at $100 per sub and you get $20K; accept 40% of those subs and charge a $3000 APC and you end up with $240K. Seems like a far better way to pay the bills and keep the lights on to me.
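
For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch using the figures quoted above; the function name and the numbers are illustrative only.

```python
def annual_revenue(submissions, submission_fee=0, acceptance_rate=0.0, apc=0):
    """Annual revenue from a mix of submission fees and article processing charges."""
    accepted = submissions * acceptance_rate
    return submissions * submission_fee + accepted * apc

# Larger journal: 1000 submissions per year
print(annual_revenue(1000, submission_fee=100))              # -> $100,000 (fee-only)
print(annual_revenue(1000, acceptance_rate=0.30, apc=1000))  # -> $300,000 (APC-only)

# Niche journal: 200 submissions per year
print(annual_revenue(200, submission_fee=100))               # -> $20,000 (fee-only)
print(annual_revenue(200, acceptance_rate=0.40, apc=3000))   # -> $240,000 (APC-only)
```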

Yes, I guess a journal would have to charge several hundred dollars just to cover costs, considering it would have to offer labour-intensive, high-quality services to justify the cost and the rejection rate to prospective authors. And if the journal wanted to reach industry-average profits, then much more.

Science (the journal) and its ilk could probably carry on sustainably with, say, a $200 submission fee only, and since AAAS is a “non-profit,” why wouldn’t they… Oh yes, because then the non-profit would not generate maximum profits.

It’s less a question of “maximum profits”–most NFPs that I’m familiar with do what they can to reduce costs to the academic communities they serve. But it is a question of survival, as NFPs have to turn a profit in order to stay alive. If you want to do anything new, build any new functionality or experiment with new products, you have to be able to afford that. You also have to have a savings account in case of disaster or major market changes that need to be weathered. It’s why an NFP like PLOS is stockpiling their enormous profits now rather than cutting APC levels and running at break-even. The funds give them freedom to experiment and to do new things to better serve the research community (and to be able to face an uncertain future marketplace).

But personally, I would love to see what would happen if a Science or Nature charged a very small submission fee ($10-$20) just to eliminate all the frivolous submissions they get sent that don’t stand a chance of being published. Likely the savings from cutting down on submission numbers would vastly outweigh the minimal revenue brought in.

There are plenty of ‘travel money’ type credit cards specifically marketed for online shopping. They sell them at my local supermarket and post office. No ID required!

I would have been inclined to think this was a provocateur intent on showing Elsevier to be just as vulnerable to publishing good-looking garbage as those awful Open Access people. But then reading the comments on the RetractionWatch version of this story, it does seem like perhaps Spiegelman may have people out to poke holes in his work.

But the science in the paper is reportedly accurate and of high quality. The paper itself isn’t garbage. It’s really hard to parse out any sort of motivation here.

A trust network is another solution. That is how it works in Peerage of Science:

You can only get a user account via invitation by a trusted user, or via requesting an invitation from administration. For requested invitations, the admin always checks email, identity and qualifications against external sources. Furthermore, becoming a trusted user (= the right to do peer reviews and the right to invite colleagues) requires the same check by the admin, even if you have been invited by a trusted user.

This allows combining relatively strong anonymity with trust. Even when authors choose the option to remain anonymous, you know that the person has either been checked by the admin or been invited by someone checked by the admin. And each and every peer reviewer has been checked by the admin. You cannot have two trusted-user accounts in Peerage of Science. You cannot have a trusted-user account with a false identity, unless you first create an externally verifiable history, including peer-reviewed publications, under that identity.
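
To make the account rules concrete, here is a minimal sketch (my own illustration, not Peerage of Science’s actual implementation) of the model described above: anyone can get an account via an admin check or an invitation from a trusted user, but trusted status itself always requires the admin check.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    name: str
    admin_verified: bool = False            # identity checked against external sources
    invited_by: Optional["Account"] = None  # who vouched for this account, if anyone

    def has_account(self) -> bool:
        """An account requires an admin check or an invitation from a trusted user."""
        return self.admin_verified or (
            self.invited_by is not None and self.invited_by.is_trusted()
        )

    def is_trusted(self) -> bool:
        """Trusted status (review and invite rights) always requires the admin check."""
        return self.admin_verified

# Illustration: the admin verifies Alice; Alice invites Bob; Mallory just signs up.
alice = Account("Alice", admin_verified=True)
bob = Account("Bob", invited_by=alice)
mallory = Account("Mallory")

print(alice.has_account(), alice.is_trusted())      # True True
print(bob.has_account(), bob.is_trusted())          # True False (until the admin checks Bob)
print(mallory.has_account(), mallory.is_trusted())  # False False
```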

So, only Rabbit or Rabbit’s friends may join the club? That is a serious barrier for new entrants from developing countries or people who choose not to work within the system. We have to deal with cranks in mathematics but that’s a price worth paying to keep the entry barrier as low as possible which, these days, is the ability to fill in a form and upload a PDF. Using your system, Ramanujan would not have had a chance.

Susan, maybe you missed the “or via requesting invitation from administration”?

A submission is treated as an implicit request for a user account.

Two outcomes are possible for submissions by new non-users:

1) Based on information we obtain, we can be reasonably confident the person is a scientist who has published at least one article, as the first or corresponding author, in an established international peer-reviewed journal ==> manuscript goes to peer review, and the person gets trusted user status immediately.

2) Based on information we obtain (usually because the person says so himself), we know the person does not yet have publications qualifying him or her for trusted user status (usually these are PhD students) ==> the manuscript goes to peer review, but the person does not get a user account immediately. A basic user account is given after this new submission gets its first peer review (unless the reviewer recommends discarding the ms). The person can then request trusted user status after his or her first article is published somewhere.

So, no barrier.

But related to David’s post above, you notice the gaping hole in our validation process: if a person can easily publish an article under a false identity in a journal we trust, then that abuse of trust extends to our system as well.

However, it would take a determined fraudster to first go through the entire process (I gather the average time from finished article to acceptance somewhere is over a year), and then use that published article to gain a fraudulent user account in Peerage of Science. But I would be happier if it were impossible, rather than merely far-fetched.

The journal in question permanently withdrew the suspect article after an investigation. I know these things never completely disappear from the web, but any reputable journal could easily check out Spiegelman’s story, presented in a cover letter, and proceed with review and publication. A nasty incident, but I don’t see it blocking a real publication.
