[Image: Asleep at a computer screen. Via Wikimedia.]

Without a doubt, the number one complaint at every editorial board meeting I attend is how hard it is to find good, qualified reviewers. As more and more manuscripts come in, the pool of tried-and-true reviewers is being bombarded with requests from multiple journals. Journals and books aren’t the only culprits. Many of the top experts being asked to review manuscripts are also being asked to review grant proposals. As has been discussed many times, this is an unpaid, extracurricular activity for which the reviewer gets no real credit. Anecdotally, we hear that reviewer fatigue is the main reason reviews go uncompleted and invitations are ignored or declined. A recent study looked at what happens at one journal when a reviewer declines.

An editor at the American Political Science Review, Marijke Breuning, and the journal staff compiled reviewer data to determine why people decline to review and whether there are any differences in reviewer behavior by gender. In 2013, the journal sent out 4,563 requests to review either new submissions (96%) or revisions (4%). The good news was that 82.8% of invited reviewers responded one way or another to the invitation. In a close-knit community, such as you find around a society journal, reviewers tend to extend their colleagues the courtesy of a response.

Almost 60% of the requests drew a positive response, while 23% were declined. As most journal editors can tell you, getting someone to accept the invitation to review is not the same as actually getting the review. Some reviewers in the study were dismissed because the required number of reviews had already been satisfied; others simply never completed the review.

The authors used their manuscript submission system to analyze the reasons given for not accepting the invitation to review. They suspected that reviewer fatigue was to blame. Here is what they found:

  • Almost 29% of reviewers gave no reason for declining the invitation to review
  • 25% said they were too busy but did not indicate that a heavy load of review requests was the culprit
  • 14% declined because they had too many other review invitations
  • 33% declined for other reasons, such as “not an expert” (8%), “on leave” (3.3%), “already reviewed this paper” (3.2%), “university admin duties” (3%), etc.

Interestingly, 28 people who declined said they did so because they are editors at other journals. Mostly, these editors cited the workload of their own journals, essentially taking themselves out of the “pool of experts.”

The authors saw no statistical difference in the behavior of male vs. female reviewers; however, the reasons given for declining offer a familiar glimpse. Men declined because of administrative duties (such as serving as department chair or dean) at a higher rate than women did. Likewise, the bulk of the journal editors who declined were men. Women were more likely to decline because they were on “personal leave or sabbatical” or had “personal issues” such as a family member’s illness. Also not surprising: more women declined due to “maternity or paternity leave.”

There has been much debate about wasted reviewer time. A single paper could be reviewed by 6-10 people (sometimes more) at multiple journals before ever being accepted. Without a doubt, more papers are being submitted today than 10, or even 5, years ago.

Every board meeting I attend has a discussion about reviewers—not fast enough, not responsive to the invitations, not writing quality reviews, refusing to re-review revised papers, etc. There is definitely a feeling that there aren’t enough “good” reviewers to go around. These anecdotes may not be telling the whole story.

Using data from our tracking systems is one way to determine whether reviewer fatigue is real and whether editorial boards are exacerbating the problem. Most, if not all, tracking systems have reports showing the number of reviewers used in a given time period compared to the number of reviewers in the database. Likewise, reports are available showing how many papers are assigned per reviewer, along with a “top reviewer” report that shows how frequently the same people are used over and over again.
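As a rough illustration of how such an export could be summarized, here is a short sketch in Python; the file layout and column names are assumptions, not any particular vendor’s report format:

```python
# A rough sketch of the kind of summary a tracking-system export can support.
# The file name and column names (manuscript_id, reviewer_id, reviewer_name,
# year) are assumptions, not any vendor's actual report format.
import csv
from collections import Counter

def summarize_reviewer_load(path, year, database_size):
    """Count reviewers used, assignments per reviewer, and the most
    frequently used ("top") reviewers for a given year."""
    assignments = Counter()
    names = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["year"] != str(year):
                continue
            assignments[row["reviewer_id"]] += 1
            names[row["reviewer_id"]] = row["reviewer_name"]

    if not assignments:
        print(f"No review assignments found for {year}.")
        return

    used = len(assignments)
    total = sum(assignments.values())
    print(f"Reviewers used in {year}: {used} of {database_size} in the database "
          f"({used / database_size:.1%})")
    print(f"Average assignments per reviewer used: {total / used:.1f}")
    print("Ten most frequently used reviewers:")
    for reviewer_id, count in assignments.most_common(10):
        print(f"  {names[reviewer_id]}: {count} assignments")

# Example call with hypothetical values:
# summarize_reviewer_load("review_assignments.csv", 2013, database_size=10000)
```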

On occasion, an editor asks us whether they can create a custom list of “preferred reviewers.” They understandably don’t want to sift through 10,000 names, and they don’t want to take the time to look at each person’s review history before choosing them. But a preferred list only adds to the fatigue problem. Selecting 50 “go-to” reviewers out of 10,000 means you are burning out those 50 awesome reviewers.

Breuning et al. include some tips for avoiding reviewer fatigue:

  1. Search beyond the usual suspects. Many reviewers are invited based on their reputation or who the editor knows personally.
  2. First-time reviewers are more likely to accept the invitation, even if they need a little coaching to get the review done. Expanding your search for reviewers to databases of dissertations and conference programs may help.
  3. Personalize your invitation letters where possible.

In the end, only 14% of the reviewers who declined said the reason was too many review requests. Certainly there are other time-consuming pressures on the people journal editors count on as reviewers.

I would be very interested in seeing stats from other journals that may be collecting this kind of information. Assuming there is a problem is one thing; identifying the problem with data provides a much better story.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

27 Thoughts on "Is Reviewer Fatigue a Real Thing?"

Anecdotal only, I fear. As an associate editor for a few journals, and a reviewer for many more, I can confirm that most reviewers I ask say yes, and respond within the time set (usually 2-3 weeks, occasionally more). As a reviewer, I usually say yes. I get about 30-40 requests a year. I refuse perhaps 10-20% of requests. Why? There are two main reasons: a flood of requests coming in when I am stuck with deadlines for other things, and because the paper is really outside my area of expertise. The latter is getting more common, which suggests that editors are having difficulty getting enough reviews. I admit to a slight prejudice against automated systems, but I applaud those journals that feed back their decision and also give me the anonymous reports of the other reviewers. I would feel guilty about refusing to review a paper by authors whose first language was not English, or where it was obvious that the authors were at the beginning of their careers.
It is a different can of worms, but I think double-blind reviewing actually does harm. I understand the reasons why some people advocate it, but (especially where I am critical) authors are entitled to know where I am coming from.

“Expanding your search for reviewers to databases of dissertations and conference programs may help.”

That will certainly NOT help, if your goal is to get knowledgeable reviewers that you can rely on. If your candidate reviewers do not have a credible publication record in the field (in which case they will be found in a publications search), they should not be reviewing manuscripts for credible journals.

I would think that someone who has just done a thesis or presentation on the topic would be pretty well qualified to do a review.

I occasionally use grad students, post-docs, adjuncts, and other non-tenure-track people as reviewers and generally find that the quality of their reports exceeds what you would normally get from established “big names” in the field. You shouldn’t treat someone’s age or place in the academic world as the ultimate indicator of their expertise or their ability to deliver a good reviewer report. Good reviewers, just like good authors, can come from anywhere.

Yes of course all the above can be excellent reviewers, IF they already have peer-reviewed publications in that field in journals that one respects. A non-published (and perhaps non-publishable…) dissertation or presentation is not sufficient evidence of that person’s competence to review, and if you are using such reviewers that is simply not fair to the authors.

I disagree with a standard that says only published authors can be reviewers. When someone does a Ph.D. thesis they may be the leading expert on their specific topic. Then too there are industrial experts who do not publish, etc.

I agree in principle, but I think students should be mentored by those with more experience to moderate their efforts. My early reviews were unnecessarily fierce, because I was nervous. I would generalise by saying that first reviews should be a partnership with someone with experience (of being at the receiving end too!).

I disagree but I would not choose three reviewers for a paper that were all from this category. I attended a board meeting for one of our biggest journals earlier this year. The meeting was held at a related conference. For the first time ever, a handful of students came and asked if they could observe the meeting. They were invited to stay and some of them approached me about how they can be a reviewer. I encouraged them to create an account but also to write to the editor explaining their area of study and their interest in reviewing.

A good journal editor will see that he or she has an important role in developing the next generation of scholars for the betterment of the profession. Many students, at least in engineering programs, are writing a series of journal articles and presenting those as their thesis. These papers, if accepted, add to their publication record and certainly make them qualified to review a paper, even if they need a little help in doing so.

” In 2013, the journal sent out 4,563 requests to review either new submissions (96%) or revisions (4%). The good news was that 82.8% of reviewers responded one way or another to the invitation. In a close community, as you would find with a society journal, reviewers will have the courtesy of responding to their colleagues.”

I think this last sentence is a little harsh!

The article doesn’t specify how the email addresses were acquired – it merely says “Requests to review are standardized e-mails sent through Editorial Manager”.
As I understand it, one way is by extracting data held in the editorial management system from previous submissions. Is there any validation / verification of these addresses, such as checking for accuracy? Do these systems track non-deliverable emails, or “bounces”?

I came across a study on emails of Corresponding Authors in the MEDLINE database. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1369259/
It found that over 24% of e-mail accounts were invalid within a year of publication. I wonder what the rates are now!

There are a certain number of emails that go out into the ether and never reach the intended recipient. This is addressed in the study. I also use Editorial Manager, and if the email is bad, we do get bouncebacks. We don’t have any idea if the email goes to a spam filter. If an editor really wants a specific person to review and that person has not responded, she or he should send a personal email outside the system.

The point I was making about society journals is that I suspect the communities around society journals are more intimate than the communities around commercial journals that are not affiliated with societies. There is a lot of crossover for sure, but I find that loyalty to one’s main membership society is strong. I have no evidence to support this but would love to see some.

To your last point, ORCID is trying to solve the email problem from published papers. If an author changes affiliation and therefore email, the ORCID profile should be updated and include all related contact info. I can tell you that keeping a huge database of authors and reviewers up to date is next to impossible.

“Reviewer fatigue” is a loaded concept that reduces the peer review process to a market that can be explained simply by supply and demand. For the most part, peer review is still a voluntary transaction, although some companies are attempting to transform it into a financial one. Sending relevant, readable, and interesting (viz. novel) papers to reviewers is the best way of increasing reviewer acceptance. Leveraging a personal relationship or using the prestige of the journal are also ways of increasing acceptance. And rewarding reviewers, through public acknowledgement, can also help.

It would be a mistake for a journal editor to claim that reviewer fatigue is the culprit when none of the above strategies are employed.

The statement that reviewers are not paid is of course not true for reviewers of book manuscripts. Again, people writing for TSK are all too prone to make general statements that really pertain only to journals, not to books. It’s a bad habit that should be corrected. Of course, we call the payments to book reviewers “honoraria” for a reason: you are not going to get rich reviewing book manuscripts! By the way, most publishers I know offer a book reviewer the choice of a cash payment or double that amount in books from the publisher’s list, which is a good way to reduce excess inventory whose value may have been written off anyway.

Sandy, as I understand it the journal industry publishes something like two million articles a year. How does monograph publishing compare to this?

I wasn’t suggesting that journal article reviewers should be paid, just protesting the unqualified statement (in which books are mentioned) that all this reviewing activity is “unpaid.”

We looked at this between 2010 and 2014 for a number of our journals and have mostly seen a small decline in reviewer accept rates (though I don’t know if the changes are significant).

Journal A – 2010 accept rate 44.9% – 2014 accept rate 46.8% (change +1.9%)
Journal B – 2010 accept rate 44.4% – 2014 accept rate 36.9% (change -7.5%)
Journal C – 2010 accept rate 58.3% – 2014 accept rate 46.9% (change -11.4%)
Journal D – 2010 accept rate 41.7% – 2014 accept rate 38.5% (change -3.2%)
Journal E – 2010 accept rate 51.9% – 2014 accept rate 43.6% (change -8.3%)

We are trialing Publons as one possible way of addressing this.

We see the same pattern of declining reviewer agreement rate at our journal, and we’re also trialling Publons. We analysed the effect of the latter with a ‘before, after, control, impact’ design, where we expect to see an increase in reviewer agreement rate, but only for participating reviewers after the Publons trial began. We don’t have all that much data, but there is no effect so far.
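For readers unfamiliar with that kind of design, here is a minimal sketch of the comparison; the numbers are made up for illustration and are not our journal’s data:

```python
# A minimal sketch of the before/after comparison for participating vs.
# non-participating reviewers. All numbers here are made up for illustration;
# they are not real journal data.

# invitations[group][period] = (accepted, total)
invitations = {
    "participants": {"before": (120, 300), "after": (130, 310)},
    "control":      {"before": (400, 1000), "after": (380, 1005)},
}

def rate(accepted, total):
    return accepted / total

def change(group):
    before = rate(*invitations[group]["before"])
    after = rate(*invitations[group]["after"])
    return after - before

# The estimated effect is the change in agreement rate among participating
# reviewers minus the change among controls over the same period.
effect = change("participants") - change("control")
print(f"Change (participants): {change('participants'):+.1%}")
print(f"Change (control):      {change('control'):+.1%}")
print(f"Estimated effect:      {effect:+.1%}")
```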

We can count on you for a serious and scientific evaluation. If you are dealing with one journal, what is your control? Or, are you splitting your journal up into random blocks of papers, some of which will use Publons and others (the control) will not? And lastly, how are you measuring “impact”?

This comment prompts a third reason why I have turned down a request to review: where the journal concerned has a very low acceptance rate (25% or less). It amazes me that such journals can (a) reject a paper from me on the grounds that it is not “important” enough, while (b) regarding me as a suitable reviewer.

All this talk of databases suggests that not enough effort is going into finding reviewers whose present work is closely related to the article being reviewed. A while back I had an SBIR grant from DOE to develop a procedure for finding new reviewers for their grant proposals. The procedure worked well, but it was labor-intensive, because it involved looking at the candidate reviewers’ actual publications.

The British Journal of Educational Technology uses a different approach. It has a panel of over 3,000 volunteer reviewers, with about 500 or so active ones. The panel is sent a list of titles and abstracts of the papers received every 3 months or so, and members of the panel indicate which papers (up to three) they would like to review. More details of this system are available from me on request.

If you want something done, ask a busy person; the other kind has no time.

Benjamin Franklin

Equally interesting is to ask the question the other way around: why do people agree to review manuscripts for journals, even when they are already busy? Some of the reasons are likely to be less admirable than others. For example, would it not be reasonable to predict that many (most?) people, regardless of how busy they are, would agree, with eager anticipation, every time they are invited to review a paper that cites their own work favorably or unfavorably? Or a paper that supports or criticizes their favorite, or least favorite, theory? Or a paper authored by a research rival?
http://www.musingsone.com/2015/03/why-be-reviewer.html

It’s been a while since I’ve done a reviewer survey, but when I asked this question (why do you review?), the most common answer was to contribute to the profession. Second was to pay it forward, meaning they publish, have benefited from reviews, and want to reciprocate. The third was to stay informed of advances in the field. These answers came from oncology folks, not engineers.

As a reviewer, one of my biggest issues is being asked to review poorly written manuscripts. With them, it takes too long to figure out what was done or found, and it feels like a waste of my time. I would appreciate it if editors skimmed manuscripts to make sure they are comprehensible before sending them out for review. (I’m sure that some do this.)

I’m sure another major reason for reviewer non-response is messages going to spam.

Not only do tracking systems offer robust reviewer reports, as Angela mentions, but Editorial Manager offers a Reviewer Discovery tool that puts the scholarly profiles and email addresses of over 3,500 potential matches, curated by ProQuest’s Pivot, at an editor’s fingertips. While it won’t cure reviewer fatigue, it could help add some new qualified names to the list and take the burden off some of those who are asked most frequently.

I am not sure that 3,500 potential matches will fit at, or on, an editor’s fingertips. It sounds like the selection criteria are far too broad, which I suspect is also a problem with all simple database-driven approaches. This is why I developed a discovery algorithm that ranks candidates by conceptual closeness. What an editor needs initially is perhaps the ten best candidates. Then another ten, not quite so good, if needed, and so on.
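As a purely illustrative sketch, not the actual algorithm, ranking by conceptual closeness could look something like comparing word-frequency vectors of each candidate’s publication abstracts against the manuscript abstract and returning the ten best matches:

```python
# A purely illustrative sketch, not the actual algorithm: rank candidate
# reviewers by "conceptual closeness," here approximated as cosine similarity
# between word-frequency vectors of the manuscript abstract and each
# candidate's publication abstracts. All names and data are hypothetical.
import math
import re
from collections import Counter

def vectorize(text):
    """Crude bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_candidates(manuscript_abstract, candidates, top_n=10):
    """candidates maps a reviewer's name to the concatenated text of
    their publication abstracts; returns the top_n closest matches."""
    target = vectorize(manuscript_abstract)
    scored = [(cosine(target, vectorize(text)), name)
              for name, text in candidates.items()]
    return [name for score, name in sorted(scored, reverse=True)[:top_n]]

# Example with hypothetical candidates:
# best = rank_candidates(abstract_text, {"Reviewer A": "...", "Reviewer B": "..."})
```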
