Editor’s Note: Today’s Guest Post comes from Phaedra Cress, Executive Editor, Aesthetic Surgery Journal.
Most Americans lie one to two times daily, according to an article in the Journal of Language and Social Psychology. Yet we are “truth biased” to believe that the majority of messages we encounter are honest rather than dishonest. This chips away at our lie-detecting skills, and in a field (or an entire era?) fraught with transparency issues, it can be incredibly detrimental.
What’s the difference between telling someone they look great in those jeans and admitting, “I didn’t really read that entire manuscript, but I came close enough to provide peer review comments”? Was that employee fired or laid off, and how can recruiters tell the difference from a LinkedIn profile? Subtle nuances tell the real story but can be hard to discern, just like the various shades of grey.
Nearly all of us have engaged in some form of writing, editing, or research during our professional careers, especially those who’ve built their careers in publishing. We endeavor to hold ourselves to the highest standards in all that we do, in both our work and personal lives, including our Instagram stories.
But do we?
Take, for example, former White House Communications Director Hope Hicks, who confessed to the House Intelligence Committee, shortly before she resigned, that she would “tell white lies” for the president. We want leadership by example, but we don’t always get it. And slowly, this has eroded many facets of publishing.
So much of what we do is variable, left to our discretion, or subjectively interpreted. Should I run an erratum because that author is asking me to, or will my journal look bad as a result? Am I ethically obligated to verify a copy of that permission form uploaded to the journal’s submission site? Has an author ever tried to access a full-text article, failed, then scanned the abstract and cited the article, not really knowing its full contents but taking the abstract at face value?
It’s easy to understand how white lies happen, but that’s no excuse, especially for publicly traded companies. When Mark Zuckerberg, Facebook’s CEO, testified before Congress, he explained what he did or didn’t do to protect our private information from being sold to the highest bidder. We want to believe him, but at gut level, most of us suspect that some small technicality makes his “truth” feel more like a lie.
How much of what we do — or don’t do — in our daily jobs in healthcare publishing affects science and medicine globally? For “authors” like me who occasionally write an opinion piece, my successes and failures are not likely to move the needle too far in any direction. But consider academicians, researchers, and scholars who have made it their life’s work to create scholarly, influential outputs. It matters. Fake news may sell, but it doesn’t fly in science and medicine. Eventually most offenders get caught. Or so we like to think.
Every day my inbox delivers notes on shamed authors whose work has been retracted, offering one-liner nuggets about their “crimes.” They are eye-catching, no doubt. Authors are stealing articles. Peer reviewers are faking comments and using faux accounts to further their reputations. Images are being doctored — or worse yet — stolen and repurposed as one’s own. Just say the word Photoshop and you might as well be guilty.
Thirteen years ago, an article published in Nature Materials warned that we were in trouble — the sustainability of scientific publishing as we know it was in jeopardy. Salami slicing was relatively new then, but today we’ve all seen, read, or peer-reviewed salami-sliced articles in which one study is chopped into multiple submissions, allowing the author more entries on their CV. The COPE website contains salami publication/submission examples dating back to 1998, though COPE didn’t begin using the term “salami slicing” until 2012, which feels like about five minutes ago. Was this the beginning of a new ethical downslide?
Where is the moral compass that should be steering us all in the right direction, and how have we lost control so completely in such a short span of time? Have we grown naïve, or so conditioned to accept new ethical assaults that recur until they’re familiar that we fail to notice them? Nobel Laureates have gone from renowned to infamous with one stroke of the pen, or iPhone swipe. Some countries have become notorious for salami-slicing articles and submitting them to journals, hoping editors don’t look too closely. Add to the modern-day editor’s resume the need to manage language barriers and dialect issues, master new detection tools, and hire psychosocial managers to keep everything above board, and someone doing an editor’s job 20 years ago would be virtually unrecognizable next to their modern counterpart.
Are authors who’ve been burned and shamed by retractions more likely to tempt fate a second time? Two researchers from Japan found that those with retractions under their belt are more likely to commit misconduct again, citing the Power Law — a mathematical model that applies to various physical, biological, and social phenomena distributed over broad magnitudes, including earthquakes, moon craters, and citations of the scientific literature. Their work was prompted by the RetractionWatch “leaderboard” showing 3 Japanese authors in the top 10 list. As RetractionWatch put it to them: “You found that 3–5% of authors with one retraction had to retract another paper within the next five years — but among those with at least five retractions, the odds of having to retract another paper within the same time period rose to 26–37%.” Those are staggering numbers.
Pile on another shade of grey — citation stacking: asking reviewers or other authors to cite specific work. What hue is this, and where is the line (read: white lie) between acceptable (i.e., the work is relevant and adds to the manuscript) and unethical (i.e., an author or editor is trying to stack the deck to improve the journal’s image or Impact Factor)? Journals have been making these “gentle recommendations” for years, but does that excuse it or make it okay? There’s no publishing bible for any of this, and as mentioned above, we rely a great deal on the morality, ethics, and good judgement of our editorial teams.
Just as we use programs such as Crossref’s Similarity Check to scan for plagiarism, should we also be scanning every figure, image, and table submitted, looking for manipulation, plagiarism, or duplicate publication? Should we create a job description solely for the person who does this? I’m thinking Editorial Ethics Coordinator or Publishing Crime Scanner. How are staff supposed to spend all their time looking for the bad guys when we’re supposed to be focused on publishing for the good guys?
In academia, it is commonplace for a senior author to delegate work to a junior author or researcher, and sometimes they later learn the work was compromised. Is the senior author complicit or negligent, however accepted the practice of delegation may be?
As we all know, Clarivate Analytics can suppress journals that game the system through tricks such as high degrees of self-citation or other unethical behavior. But why does the threshold for disciplining a journal seem so unreasonably high? And what effect does that have on authors who’ve published in those de-listed or suppressed journals? RetractionWatch covered the topic recently here. It stands to reason that authors’ work — and by extension their reputations — suffers when they publish in predatory journals and/or those that are de-listed or suppressed. Some may be gaming the system while others are caught in it inadvertently, even though the consequences may be the same.
There seems to have been a proliferation of remedies that have evolved from the standard erratum and retraction options. Within the past 10 years, a “Statement of Concern” was introduced and now we’re seeing the “retract and replace” method along with the National Library of Medicine’s shade of grey tag called “corrected and republished.” How is the average reader supposed to decipher the nuances of these levels of “fixing?”
A “white lie” here and there may be necessary in the minds of some, but white lies can pile up and lead to questions about the accuracy of the academic literature. As scholarly publishing moves to a culture of accepting constant revision (and loads of versions housed in multiple places), what are our responsibilities as editors and publishers in making sure the reader gets the truth with each and every read?
I suspect there are little white lies that have, over time, become the norm. We have better tools to find the cheats, but also to cheat the system. If we talk about the white lies, the short cuts, and the “one-time exceptions” to the rules as a community, we’re more likely to know what happens behind the curtain — and we can help one another stay honest.