Finding willing academics to review a manuscript is getting more difficult for some journals. Unfortunately, this difficulty may also be negatively affecting the judgement of journal editors, a new study reports.


The paper, “Difficulty of recruiting reviewers predicts review scores and editorial decisions at six journals of ecology and evolution”, was published in the October issue of Scientometrics. Its author, Charles Fox, is a professor of evolutionary and behavioral ecology at the University of Kentucky and the current editor of Functional Ecology.

For his study, Fox analyzed peer review data on nearly 52,000 reviews of 24,000 research papers that were sent out for review at six separate journals.

He reports that the more difficulty editors had in obtaining reviews on a manuscript (measured by the proportion of review invitations that were accepted), the less likely the manuscript was to be invited for revision. Put another way, papers that were ultimately rejected were those for which editors had more difficulty finding willing reviewers.

Moreover, the difficulty editors experienced finding reviewers predicted manuscript scores. Less trouble: higher scores; more trouble: lower scores.

These results are not altogether surprising. Editors routinely send title, author, and abstract information to invited reviewers, who use that information to decide whether to commit to a review. As a reviewer, it’s not hard to glean from the abstract whether a paper is relevant, important, and ultimately worth your time.

Now, here is where the study gets interesting. The ultimate decision to accept or reject a paper was not fully explained by what reviewers thought of the manuscript. Even after controlling for reviewer assessment, recruitment difficulty still predicted editors’ decisions. Fox postulates that declining to review a manuscript sends a subtle message to the editor that the paper is not worth publishing. The more reviewers who decline, the stronger the negative signal. He writes:

[E]ditors may be biased, although subtly, against papers for which they have difficulty recruiting reviewers, either because they believe that difficulty recruiting reviewers is more informative about manuscript quality than it actually is or because they become annoyed at or frustrated by such papers.
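To make the statistical claim concrete, here is a minimal sketch in Python (my illustration, not Fox’s actual model; the data, variable names, and coefficients are all invented) of the kind of logistic regression that could ask whether recruitment difficulty still predicts the editorial decision once reviewer scores are controlled for:

```python
# Toy illustration, not Fox's analysis: simulate per-manuscript data and
# test whether recruitment ease (proportion of review invitations accepted)
# predicts the decision even after controlling for reviewer scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# fraction of review invitations accepted for each manuscript (invented)
prop_accepted = rng.uniform(0.1, 1.0, n)
# mean reviewer score on a 1-5 scale, loosely correlated with recruitment ease
score = np.clip(2.0 + 2.0 * prop_accepted + rng.normal(0, 0.8, n), 1, 5)
# the decision depends on the scores AND, per the hypothesis, on recruitment
# ease itself (the 1.5 coefficient is the "extra" signal editors pick up)
p_revise = 1.0 / (1.0 + np.exp(-(-8.0 + 2.0 * score + 1.5 * prop_accepted)))
revision_invited = rng.binomial(1, p_revise)

df = pd.DataFrame({"prop_accepted": prop_accepted,
                   "score": score,
                   "revision_invited": revision_invited})

# if prop_accepted remains significant with score in the model, recruitment
# difficulty carries information beyond what the reviewers themselves say
model = smf.logit("revision_invited ~ score + prop_accepted", data=df).fit()
print(model.summary())
```

Because the simulated decision is built to depend on recruitment ease, the prop_accepted term stays significant alongside the score, which is the pattern Fox reports in his real data.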

Fox has no data to test his annoyance/frustration hypothesis, although anecdotally, it seems to ring true. Contacted for comment on his paper, Mark Johnston, Editor-in-Chief of the journal GENETICS, offered this response:

Editors (myself included) do get frustrated when they have to send out more and more invitations to review, but my impression is that they put that frustration aside once the reviews are in hand. […] Of course editors are human, so it would not be surprising if they had some residual unconscious bias induced by the frustration they experienced, but if that’s the case it seems they do a pretty good job of squelching that because its effect is small.

While Fox studied six ecology and evolution journals, the results are likely generalizable to other disciplines. They may not be applicable to journals that reject a sizable proportion of papers without undertaking external peer review, however. For high-profile journals that only send out papers that stand a good chance of being published, recruiting suitable referees is usually not a problem.

Tim Vines, a former managing editor of ecology journals and someone who has done research on the peer review process, agrees. He writes:

[E]ditorial rejection is a key journal tool: papers that are unlikely to be accepted take up disproportionate amounts of editorial effort, so sending them back to the authors straight away saves everyone a lot of hassle.

For journals that do not require manuscripts to report novel or significant findings, locating a willing reviewer may be much more difficult. Commercial options, where authors pay to have their paper independently reviewed, have failed to gain market acceptance, however. For the foreseeable future, frustration may continue to serve as a signal for many journal editors.

Phil Davis


Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

13 Thoughts on "Difficulty In Finding Reviewers Taints Editorial Decisions"

An interesting paper surfaced this morning that may support the notion that editorial screening leads to better peer review, with a simulation of review suggesting that “a peer review system with an active editor—that is, one who uses desk rejection before review and does not rely strictly on reviewer votes to make decisions—can mitigate some of these [negative] effects.”

https://www.cambridge.org/core/journals/ps-political-science-and-politics/article/does-peer-review-identify-the-best-papers-a-simulation-study-of-editors-reviewers-and-the-scientific-publication-process/B043BF82D20C1CB153446B7F07E3518D
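For intuition, a toy Monte Carlo along these lines is easy to write (this is my own sketch, not the cited paper’s actual simulation; all thresholds and noise levels are invented): papers carry a latent quality, each reviewer votes on a noisy reading of that quality, and an “active” editor desk-rejects weak papers before counting votes.

```python
# Toy Monte Carlo: compare the mean quality of accepted papers under pure
# reviewer voting versus voting preceded by editorial desk rejection.
import numpy as np

rng = np.random.default_rng(42)
n_papers, n_reviewers, noise = 10_000, 3, 1.0

quality = rng.normal(0.0, 1.0, n_papers)  # latent paper quality
# each reviewer sees quality plus idiosyncratic noise and votes to accept
# when the perceived quality clears a threshold
perceived = quality[:, None] + rng.normal(0.0, noise, (n_papers, n_reviewers))
votes = (perceived > 0.5).sum(axis=1)

passive = votes >= 2  # editor follows a strict reviewer majority
# an "active" editor, whose own read is also noisy, desk-rejects papers
# below a cutoff before the reviewer votes are counted
editor_read = quality + rng.normal(0.0, noise, n_papers)
active = (editor_read > -0.5) & (votes >= 2)

print(f"passive: accepts {passive.mean():.1%}, mean quality {quality[passive].mean():.2f}")
print(f"active:  accepts {active.mean():.1%}, mean quality {quality[active].mean():.2f}")
```

In runs like this, the active editor accepts slightly fewer papers, but the accepted set has a modestly higher mean quality, which matches the quoted conclusion in spirit.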

This is getting to be a real problem, at least in my limited personal experience. In several cases I know of, after failing to locate at least two suitable reviewers, the editor simply informed the author(s) that the journal could not review the paper due to a lack of reviewers.

I resigned from acting as a review editor at a couple of journals for several reasons, but to a large extent because it has gotten so hard to find reviewers. You can only call in so many favors from friends, and trying to get qualified reviewers you don’t have a personal relationship with is virtually impossible.

I don’t know what the answer is, but it seems we have lost the sense of community in which researchers who published in a journal felt an obligation to review for it. Getting bombarded by requests to review from journals you have never heard of, and that often aren’t even in your field, doesn’t help.

I do wonder how much difficulty the editors of OA megajournals experience in securing willing reviewers. While these titles can promise fast publication (provided they don’t require authors to undertake significant revisions to their manuscript), finding volunteers to review these papers in the first place may be harder than initially assumed.

As the editor of such a journal, I can indeed confirm that it’s a lot of work and that it’s the single most important issue I face. In addition, we have the problem that, as a new journal, it’s mostly unknown, it’s not indexed, and it doesn’t have an IF, all of which makes things very complicated.

C’est la même chose (“it’s the same thing”).

I suspect this is something that does not affect researchers in the centres of their fields. It is the outliers, some of whom have really novel stuff to communicate, who are penalized (and hence are we all). Nothing new here. Samuel Butler in the nineteenth century was way ahead of the Darwinians in thinking about biological problems in informational terms. This approach did not get much traction until the 1940s, when his ideas, arriving by a tortuous route, were widely communicated by Erwin Schrödinger. Butler wrote four books on evolution that he had to self-publish. Attacked by the Darwinians, the books sold poorly and did not cover their losses.

I have seen two posters related to this topic at conferences this year:

Shideler and Araujo, “After seven reviewer invitations, you may not see many citations. Sad!” at CSE, and

Overstreet et al., “Exploring whether a manuscript should be rejected if it proves difficult to secure reviewers” at ISMTE.

This may speak to the issue of increasing the pool of reviewers beyond the obvious suspects. There is a host of potential reviewers if editors would look beyond their shores. There is a lot of discussion about diversity and inclusion in the scholarly publishing industry, and the industry appears poised to take several good steps forward to build bridges for women and minorities to join in. Second, there are new tools to assist the editor in identifying the best scholars to review a paper, as well as tools to assist peer reviewers in determining the efficacy of a paper. If the industry would wholeheartedly embrace diversity and peer review analytical tools, it would no doubt give us more speed and accuracy in reviewing papers.

Could you please share what some of these “peer review analytical tools” are?

Finding reviewers is at the top of just about every editor’s list of problems. That said, with the exception of really novel work for which there may not be many qualified reviewers, I think these problems are solvable. What we find at board meetings is that not all associate editors take the time to learn how to find reviewers already in the system. I can also complain here that the available systems don’t go out of their way to make it easy. When we show editors how to find people in the system, there is always a big “aha” moment. I also believe that some editors find it “easier” to search by name for someone they know. Given that the number of people they know and remember may be limited, they select the same people all the time. Huge portions of reviewer databases go unused.

I also find that there is no good way for a willing party to come in as a reviewer. If someone calls, emails, or approaches an editor at a technical conference and says they would like to be a reviewer, what do we do with them? They can create a reviewer account and go straight into the pool with the tens of thousands of other reviewers. I tell them to send the editor a note explaining their area of expertise, but there is still no guarantee that this person will be selected if no one on the editorial board knows them.

Authors are also not serving as reviewers: the overlap is far smaller than I usually think it is. Some journals have policies that if your paper is accepted, you must do at least one review in the next year, but having the resources to track and enforce that policy is difficult.

I do think that editors see the papers that are hard to get reviewed as the squeaky wheel. I have certainly had editors ask if they can reject a paper on the basis that, after inviting 7-10 people, they can’t get anyone to commit. That’s not a very nice message to send the authors, so instead we suggest they ask the authors to suggest some names; then we can vet that list and see what works.

The study mentioned here is important and one I will share with my editors.

Would you see value in a “call for reviewers” similar to the way journals may “call for papers”?

Somewhat unrelated: does a journal’s inability or difficulty in finding reviewers (or its tendency to find only reviewers who decline) have something to do with the scope of the editorial team, and by extension, with that “iteration” of the journal? Could this be mitigated by refocusing the journal’s scope, and is this something authors even consider before submitting?

As a former editor of two sociology journals, I would endorse the study cited by Kent Anderson. Nothing uses up reviewer goodwill more quickly than being sent papers that had no reasonable prospect of becoming publishable. Editors should probably be doing more desk rejection than they do. Where I had difficulty in recruiting reviewers, this was often a tacit message that the paper was out of scope in the eyes of the community. This was trickier to resolve, because I sometimes thought we should publish papers that pushed those boundaries or brought work to the attention of the community that they might not otherwise encounter. Sometimes an editor ought to challenge a consensus. That’s usually when I called in favors. I know two of my own papers benefited from other editors taking a similar position, and you might see it as a kind of payback.

Phil – The same goes for tenure and promotion committees in many places in academia. However, I would replace the word “taint” in your headline with “guide”. In my experience as an author, reviewer, and editor/editorial board member, interesting and even controversial manuscripts find reviewers. Academics are invariably curious about their field, so they will usually be glad to review stuff that looks new and interesting. Manuscripts where the advance is incremental or unclear will struggle to find reviewers, and I think this is a valid point for editors to consider. Obviously this penalizes those who obscure their findings behind a veil of poor writing, but c’est la vie…

It seems possible to me that one explanation for the purported phenomenon is this: as an editor sends a manuscript out for review repeatedly, the further the candidate reviewers’ core competences are likely to be from the topic of the manuscript. If that were true, then (it again seems possible to me) reviewers who were preceded by many refusals would tend to give a (more or less obviously) uninformed recommendation, unaccompanied by much detailed commentary on the manuscript. I suspect that many editors would weight a negative recommendation of that sort more highly than a positive one.

I have never been a journal editor, but I have been a peer reviewer (off and on) for over 40 years. My own policies have always been to refuse a request to referee something that is outside my (fairly deep, but quite limited) core competency, but to give very detailed comments when I do accept a request. Judging from referees’ reports on my own papers, those policies are not universal…

Oh, and I’m a mathematician. The situation in other fields may be entirely unlike (what I perceive to be) the situation in mathematics.
