Editor’s Note: Today’s Guest Post comes from Phaedra Cress, Executive Editor, Aesthetic Surgery Journal.

Most Americans lie one to two times daily, according to an article in the Journal of Language and Social Psychology. Yet we are “truth biased”: we tend to believe that the majority of messages we encounter are honest rather than dishonest. That bias chips away at our lie-detecting skills, and in a field (or an entire era?) fraught with transparency issues, it can be incredibly detrimental.

What’s the difference between telling someone they look great in those jeans and admitting, “I didn’t really read that entire manuscript, but I came ‘close enough’ to provide peer review comments for it”? Was that employee fired or laid off, and how can recruiters tell the difference from a LinkedIn profile? Subtle nuances tell the real story, but they can be hard to discern, just like the various shades of grey.

[Image: Pinocchio puppets]

Nearly all of us have engaged in some form of writing, editing, or research during our professional lives, especially those who’ve built careers in publishing. We endeavor to hold ourselves to the highest standards in all that we do, in both our work and personal lives, including our Instagram stories.

But do we?

Take, for example, former White House Communications Director Hope Hicks, who admitted to the House Intelligence Committee, shortly before she resigned, that she would “tell white lies” for the president. We want leadership by example, but we don’t always get it. And slowly, that shortfall has eroded many facets of publishing.

So much of what we do is variable, left to our discretion, or subjectively interpreted. Should I run an erratum because an author is asking me to, or will my journal look bad as a result? Am I ethically obligated to verify a copy of the permission form uploaded to the journal’s submission site? Has an author ever tried to access a full-text article, failed, then skimmed the abstract and cited the paper anyway, knowing little about its full contents and taking the abstract at face value?

It’s easy to understand how white lies happen, but that’s no excuse, especially for publicly traded companies. When Mark Zuckerberg, Facebook’s CEO, testified before Congress, he described what he did or didn’t do to protect all of our private information from being sold at auction. We want to believe him, but at gut level, most of us suspect that there is a small technicality that makes his “truth” feel more like a lie.

How much of what we do — or don’t do — in our daily jobs in healthcare publishing affects science and medicine globally? For “authors” like me who occasionally write an opinion piece, my successes and failures are unlikely to move the needle far in any direction. But consider the academicians, researchers, and scholars who have made it their life’s work to create scholarly, influential outputs. It matters. Fake news may sell, but it doesn’t fly in science and medicine. Eventually most offenders get caught. Or so we like to think.

Every day my inbox delivers notes on shamed authors whose work has been retracted, offering one-line nuggets about their “crimes.” They are eye-catching, no doubt. Authors are stealing articles. Peer reviewers are faking comments and using faux accounts to further their reputations. Images are being doctored — or worse yet — stolen and repurposed as one’s own. Just say the word Photoshop and you might as well be guilty.

Thirteen years ago, an article published in Nature Materials warned that we were in trouble: the sustainability of scientific publishing as we know it was in jeopardy. Salami slicing was relatively new then, but today we’ve all seen, read, or peer-reviewed salami-sliced articles, in which one study is chopped into multiple submissions to give the author more entries on their CV. The COPE website contains examples of salami publication/submission dating back to 1998, but COPE didn’t begin using the term “salami slicing” until 2012, which feels like about five minutes ago. Was this the beginning of a new ethical downslide?

Where is the moral compass that should be steering us all in the right direction, and how have we lost control so completely in such a short span of time? Have we grown naïve, or so conditioned to new ethical assaults that they recur until they’re too familiar to notice? Nobel Laureates have gone from renowned to infamous with one stroke of the pen, or one iPhone swipe. Some countries have become notorious for salami-slicing articles and submitting them to journals, hoping editors don’t look too closely. Add to the modern editor’s résumé the need to manage language barriers and dialect issues, master new detection tools, and hire psychosocial managers to keep everything above board, and the editor’s job of 20 years ago would be virtually unrecognizable to its modern counterpart.

Are authors who’ve been burned and shamed by retractions more likely to tempt fate a second time? Two researchers from Japan found that those with retractions under their belt are more likely to commit misconduct again, and that repeat retractions follow a power law — a mathematical model describing various physical, biological, and social phenomena distributed over broad ranges of magnitude, including earthquakes, moon craters, and citations of scientific literature. Their work was prompted by the RetractionWatch “leaderboard” showing three Japanese authors in the top-10 list. As RetractionWatch summarized the finding in an interview with the pair: “You found that 3–5% of authors with one retraction had to retract another paper within the next five years — but among those with at least five retractions, the odds of having to retract another paper within the same time period rose to 26–37%.” It’s a staggering number.
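For readers who haven’t met the term, a power law simply means that the probability of observing an event of a given size falls off as a fixed power of that size. The line below is the textbook form of the distribution, not a formula taken from the Japanese study itself:

P(x) ∝ x^(−α), with α > 1

The heavy tail is the point: most authors have zero or one retraction, while a small number account for a wildly disproportionate share, the same pattern seen in earthquake magnitudes and citation counts.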

Pile on another shade of grey — citation stacking: asking reviewers or other authors to cite specific work. What hue is this, and where is the line (read: white lie) between acceptable (i.e., the work is relevant and adds to the manuscript) and unethical (i.e., an author or editor is trying to stack the deck to improve a journal’s image or Impact Factor)? Journals have been making these “gentle recommendations” for years, but does that excuse it or make it okay? There is no publishing bible for any of this, and as mentioned above, we rely heavily on the morality, ethics, and good judgment of our editorial teams.

Just as we use programs such as Crossref’s Similarity Check to scan for plagiarism, should we be scanning every figure, image, and table submitted, looking for manipulation, plagiarism, or duplicate publication? Should we create a job description solely for the person who does this? I’m thinking Editorial Ethics Coordinator, or Publishing Crime Scanner. How are staff supposed to spend all their time looking for bad guys when we’re supposed to be focused on publishing for the good guys?
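Some of this screening is already automatable, at least crudely. Here is a minimal sketch of what a first-pass duplicate-figure check could look like, using the open-source Pillow and ImageHash Python libraries; the directory name, the distance threshold, and the function name are illustrative assumptions, not any journal’s actual workflow:

from pathlib import Path
from itertools import combinations

from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

def find_near_duplicate_figures(figure_dir, max_distance=5):
    """Flag pairs of figures whose perceptual hashes are close.

    A small Hamming distance suggests near-identical images (re-used or
    lightly altered); a human editor must still inspect each flagged pair,
    since the hash alone proves nothing.
    """
    hashes = {
        path: imagehash.phash(Image.open(path))
        for path in Path(figure_dir).glob("*.png")
    }
    return [
        (a.name, b.name, hashes[a] - hashes[b])  # '-' gives Hamming distance
        for a, b in combinations(hashes, 2)
        if hashes[a] - hashes[b] <= max_distance
    ]

for fig_a, fig_b, dist in find_near_duplicate_figures("submission_figures"):
    print(f"Possible duplicate: {fig_a} vs {fig_b} (distance {dist})")

A tool like this only surfaces candidates; the judgment calls the post describes still land on a person’s desk.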

In academia, it is commonplace for a senior author to delegate work to a junior author or researcher, and sometimes the senior author later learns that the work was compromised. Is the senior author complicit or negligent, however accepted the practice of delegation may be?

As we all know, Clarivate Analytics can suppress journals that game the system through tricks such as high rates of self-citation or other unethical behavior. But why does the threshold for disciplining a journal seem so unreasonably high? And what effect does that have on authors who’ve published in those de-listed or suppressed journals? RetractionWatch covered the topic recently. It stands to reason that authors’ work — and by extension their reputations — suffers when they publish in predatory journals and/or those that are de-listed or suppressed. Some may be gaming the system while others are caught in it inadvertently, even though the consequences may be the same.

Remedies have proliferated beyond the standard erratum and retraction options. Within the past 10 years, the “Expression of Concern” was introduced, and now we’re seeing the “retract and replace” method along with the National Library of Medicine’s shade-of-grey tag, “corrected and republished.” How is the average reader supposed to decipher the nuances of these levels of “fixing”?

A “white lie” here and there may be necessary in the minds of some, but white lies pile up and lead to questions about the accuracy of the academic literature. As scholarly publishing moves toward a culture of constant revision (and loads of versions housed in multiple places), what are our responsibilities as editors and publishers for making sure the reader gets the truth with each and every read?

I suspect there are little white lies that have, over time, become the norm. We have better tools to find the cheats, but also better tools to cheat the system. If we talk as a community about the white lies, the shortcuts, and the “one-time exceptions” to the rules, we’re more likely to know what happens behind the curtain — and we can help one another stay honest.

Discussion

12 Thoughts on "Little White Lies in Healthcare Publishing"

My impression is that salami slicing and citation stacking are less common than they were. I now get this impression as a researcher interviewing early career researchers, whose mentors seem to have warned them against such practices; but even ten years ago, when I published 27 dental research journals, editorial boards were nervous about these practices. Do we have any evidence that they are increasing? Has anyone done a study showing change?

Hi Anthony, while I generally agree with you, there are some countries where it is still quite prevalent, and we still need to be vigilant. I’d be very interested in a study like the one you mentioned. I’ll keep my eyes peeled!

My only objection to this article is the title, which understates its applicability. My sense is that literally every example mentioned by the author applies to virtually any area of scholarly publishing.

Rich, examples certainly vary from discipline to discipline such as medicine and science vs humanities and social sciences. But at the end of the day, the work we do transcends specialty and the same principles (or worries) apply. Thanks for your comment!

Salami publishing was a problem being discussed well before 2012. Edward J. Huth wrote about it in 1986 (Annals of Internal Medicine, 104:257–259); he called it “salami science.” “Salami slicing” shows up in a 1999 article: “Salami slicing, shotgunning, and the ethics of authorship” (AJR: American Journal of Roentgenology, 173(2):265, August 1999). Librarians have long been concerned about this practice because it was a big contributor to the increase in the number of articles being published, which in turn resulted in higher journal subscription costs.

Hi Elizabeth, I was referring specifically to the COPE archives, where the salami slicing cases date to 1998. I’m sure it was happening well before that, perhaps more under the radar than we realized. Thanks for the reference; I’ll plan to read it. I’m sure librarians have been watching this phenomenon carefully. Besides raising subscription costs and bundling journals, thereby forcing librarians to take subscriptions to journals they don’t really want when they purchase what they do want, salami slicing has an impact on performance, promotions, etc., as another reader astutely commented.

It’s not just “slicing” but making a bunch of salamis from the same basic recipe, changing one or two ingredients, and then slicing and stacking. Left-handed monkey wrenches XX: red handles. The real issue is why this is occurring: largely for promotion and tenure by the numbers rather than for collegial communication, much of which could be published as “notes” or “letters,” formats that have now gained the stature of “journals” for those who look at the numbers with even less concern than those who cite based on abstracts, not having read the articles. As one person on this list has suggested when doing such evaluations, “read the articles!” Perhaps that’s where the focus should be.

Tom, I couldn’t agree more. Same recipe, one new spice (at best). And yes, reading the article is hands-down the bare minimum researchers should do before assuming they know its intent and import and using it as fodder to prove their own point. Thanks!

A less common (mal)practice than data segmentation or disaggregation (i.e., salami slicing) is data augmentation, where the author simply adds new data points to an already published data sample (see https://ori.hhs.gov/plagiarism-15#DataAggregation). Whether the practice is segmentation or aggregation, the result is usually the same: a distortion of the scientific record. All of this is a symptom of a malady affecting far too many scientists: they seem to have forgotten that their primary goal is the search for truth, not the quest for a longer publication list.

Miguel, well said and thanks for bringing up this point along with the link. Cheers.

Here is an interesting case of salami slicing reported by Neuroskeptic in Discover magazine:

A single mental health project was carried out in Iran and then divided into 31 papers, one for each of Iran’s 31 provinces.
All of the papers used the same methodology, and many sections, including the introduction, the methods and materials, and even the results and discussion, were similar to one another.
All of the papers were published on the same day in a supplementary issue.
Some of the authors are the same across the papers; one of them is an associate editor of the journal that published them (Archives of Iranian Medicine).
Several months before publishing these papers, Archives of Iranian Medicine had published a paper containing general information about Iran, but not broken down by province. All 31 papers cited that article, making it a highly cited paper in Clarivate Analytics’ Essential Science Indicators.
You can see the papers in PubMed:
https://www.ncbi.nlm.nih.gov/pubmed/?term=%22Arch+Iran+Med%22%5Bjour%5D+%22mental+health+status%22+2015
