One afternoon in the men’s shower room of a Cornell pool, I was at the receiving end of a long rant by a researcher whose manuscript was rejected by a prominent journal in his field.
“The editor is getting too old. He should retire!” was one declaration. “The reviewer didn’t understand the science!” was another. And, chalking it up to petty rivalry: “I’m sure one of the reviewers rejected my paper because he knew it came from my lab. I’ll never submit to this journal again,” he concluded. “It was a waste of time.”
At one point, I offered up the possibility that perhaps the manuscript wasn’t his lab’s best work. And if, indeed, the first journal had missed a great manuscript, there must be several other journal editors who would love to see it. Unfortunately, this just led to another rant about the problems in journal publishing, why the system is completely broken, and how it needs to be fixed.
Many readers of this blog have been privy to interactions like this. Editors attempt to make the best possible decisions with never enough information. And yet, with a limited group of experts, each of whom may harbor deep biases of their own, we have something that works. Still, it could be better.
I worked recently with a medical journal publisher that wanted to improve the author experience. For the last three years, this publisher has been surveying their corresponding authors–those whose manuscripts were rejected as well as accepted–to understand how they could improve the submission, review, and publication process. The publisher had made some editorial policy changes over the past year–most importantly, restricting the amount of additional experimentation and the number of revisions required of its authors–and wanted to know if these changes made any difference to the author experience. As you can see in the first Figure, it doesn’t look like their policy changes made much of a difference.
Break the respondents down into authors whose manuscript was accepted (blue) and rejected (red) and you’ll notice a great schism in author responses (Figure 2). Not only did rejected authors believe that the editorial board failed to understand their work, but peer reviewers — supposed experts in their field — failed to understand it as well. In the minds of rejected authors, the editorial board did not properly weigh the reviewers’ comments and ultimately made decisions that were not based on scientific grounds. Not surprisingly, rejected authors were much less likely to believe they would ever submit again to this journal. In contrast, accepted authors were resoundingly supportive of nearly every aspect of the journal.
What does this survey tell us about the perception of authors?
First, I think it indicates that the perceived competency of editorial boards and their ability to objectively and scientifically evaluate the quality of a manuscript is largely determined by a single accept/reject decision. While corresponding authors can be expected to act as partisan supporters of their own work, a perception of unfairness in the editorial and review process weighs heavily on their overall experience. Second, it confirms that the notion of “scientifically sound” isn’t a quality that can be evaluated objectively with a yes/no decision, but is something that is highly subjective and contextual.
For journals that wish to be selective in what they publish, rejection is just part of the process. Unfortunately, many journals send out form letters to rejected authors, conveying a sense that their papers are not worth the time it takes to write a personalized response. The rejection rate of the journal is often stated in these form letters, implying that publication is more of a lottery than the result of careful selection.
In addition, some editors do a poor job editing reviews to ensure that they are written politely and contain constructive remarks. Dismissive, sarcastic, and sexist remarks can often make it back to authors when they are at their most sensitive and emotionally vulnerable. At its worst, a single remark can create a major public relations disaster for a publisher.
I don’t know whether this Cornell researcher would have reacted differently had his rejection been communicated some other way. Would addressing the emotional state of the rejected author help in any way (“I know you may be feeling very angry and frustrated by our decision”), or merely prime authors to feel that outrage is the normal response to rejection? Does offering a consolation prize (“We’d be happy to consider your manuscript for publication in another [viz. lower-prestige] journal we publish”) help with rejection, or just rub salt into the wound? And lastly, is it worth the time to write individual rejection letters, or does doing so simply invite authors to write appeals?
While I don’t have a remedy for healing the emotional harm that can be done in the rejection process (other than ensuring that rejected authors are treated with respect and insulated from insensitive comments from reviewers), I don’t believe this topic has received sufficient attention when discussing the experience of authors. All of the surveys I’ve seen have been limited to the experiences of accepted and published authors–a response set that may be just as clouded and biased.