I was scrolling through Twitter with my morning coffee in my favorite place — my screened-in porch — and there were two threads that caught my eye:

The first was about a procrastination method applied to conducting peer review. The author of the tweet talked about carrying the paper everywhere for a while to “feel its presence,” followed by staring “longingly” at the paper from afar. The process then involves picking up the paper from time to time to read the first sentence. This comes to an inevitable end when the reviewer gets an email from the editor and voila, the review is completed and submitted within 3 hours.

[Image: people looking at each other across a gap]

What followed this tweet were anecdotes from other academics about their own methods of procrastination when it comes to completing peer review.

The second thread, also read over coffee, was a complaint about only getting 3 weeks to review a “long, complex” article. The reviewer estimated that the review would take him two working days and that he would need to reserve a whole weekend for it. It ended with “get real, editors.”

Both of these threads smacked of the disconnect between what authors value (fast time to publication) and what reviewers need (flexibility and time).

Author Expectations

There have been plenty of author surveys conducted regarding the value of faster times to publication.

A large global survey done by Editage found that 43% of authors surveyed said that ideally it would take less than 3 months to get a paper published. 69% said less than 6 months. When the same survey asked what was most important to authors in selecting a journal, fast time to publication was the third most important factor after journal Impact Factor (1) and scope of the journal (2).

Authors were also asked what improvements could be made to the publishing process. Improving the speed of publication came first, followed by improving the overall peer review process and the quality of reviews.

A different survey was conducted with authors of megajournals — PLOS One, PeerJ, BMJ Open, and Sage Open. This survey also asked authors what was most important to them in selecting a journal. Journal quality was rated highest, followed by “speed of review and publication process.”

The 2011 Peer Review Survey done by Sense about Science reported that 42% of respondents found the peer review process to be slow or very slow.

Lastly, the Publons Global State of Peer Review Survey found that 29.1% of respondents were dissatisfied or very dissatisfied with the time to publication with 28.1% neutral and 42.9% satisfied or very satisfied.

This is just a smattering of the survey data available, but it illustrates the importance of speed in publication from the author’s perspective.

Reviewer Realities

What about reviewers? Looking back at the second tweet lamenting 3 weeks to review a long paper, it seems speeding up the publication process is not a priority for those conducting the reviews. There have been some excellent surveys done of peer reviewers, but few ask about the amount of time needed to complete a review. Still, here are some numbers:

The Sense about Science survey did ask reviewers to share some data about their reviewing habits. Of those surveyed, 14% said they typically return a review within 1 week, 31% said 2 weeks, 23% said 3 weeks, and 18% said 4 weeks. Note that these are reviewer-reported times, not data based on actual behavior.

Further, 57% said they always return reviews on time. Only 11% admit to never returning their reviews on time.

This survey also asked reviewers why they decline reviewer invitations. 12% said that the amount of time given to review was too short.

The Publons report collected data from the ScholarOne manuscript submission system to see just how much time is taken to complete a review.

Publons reports that the median time from invitation to the submission of a completed review is 16.4 days (19.1 mean). They estimate that actually reading the paper and completing the review takes on average 5 hours. The Sense about Science survey reported 6 hours.

Let’s take a step back and summarize the landscape of data here.

  1. Millions of hours are spent by scholars peer reviewing the work of other scholars
  2. Peer review as an activity rates as extremely important to scholars in all surveys
  3. Most scholars claim that their work has been improved by peer review
  4. Speed of publication is extremely important to authors
  5. Finding good reviewers is getting more difficult

I know what you are thinking — aren’t authors and reviewers the same group of people? Yes and no. I don’t have a lot of data to point to, but Springer estimated that the overlap was about two-thirds. Anecdotally, journals have been reporting anywhere from 50-70% overlap. Still not too bad.

Why are scholars showing dissatisfaction with the review process when there is so much overlap between the communities?

Part of this disconnect may lie in the ambiguities around the process. Authors often see status updates that say “Under Review,” which can cover a whole host of activities. Also, the time from reviewer invitation to the decision letter going to the author is only as quick as the slowest reviewer. If the editor requires three reviews and one reviewer is late or requests an extension, the author must wait for the decision letter.

The worst is when a reviewer never returns a review they agreed to complete. In this instance, the editor doesn’t even know they need to find another reviewer until the review is already late and the process starts from the beginning. This wastes everyone’s time.

Solutions to Consider

So how do we bridge this gap between what authors expect and what reviewers can provide?

The Publons report found that there is an imbalance between who produces papers and who reviews papers. This will not be a surprise to many journal editors. The sharp increase in submissions across the board has come from China. However, journals are not drawing on reviewers from China at a proportionate rate. Scholars in the US contribute 32.9% of all reviews compared to 25.4% of published article output. Conversely, China contributes 8.8% of the reviews with an article output of 13.8%.

There is interesting data in the report showing that editors predominantly choose reviewers from their own region, and that editors are predominantly from the US and Europe. The output from China will continue to grow, and more attention needs to be paid to including Chinese scholars in the peer review process. Author-focused workshops and trainings conducted in China, India, South America, Africa, etc., should include reviewer training as well.

Training of reviewers is always a hot topic. All of the surveys mentioned in this post ask about reviewer training opportunities, and overwhelmingly, scholars say they want training. There are several publishers and societies that offer robust training programs online. I would love to know what the uptake looks like. I predict that training is something scholars might say they want on a survey, but actually finding the time to do some online training may be difficult.

It does strike me that part of reviewer training needs to be a frank discussion about saying no to a review request. Life happens and the best of intentions can sometimes go awry. That said, the 260 million researcher days spent waiting for peer review to be completed deserves some attention. Reviewer instructions and invitations should make it explicitly clear that if at any time, the reviewer cannot complete the review, they should inform the editor immediately.

Setting expectations for authors is also important. Advertising an average time to first decision that includes desk-rejected papers (those not sent out for review) does not paint an accurate picture for authors.

An audit of the review process should be completed every year or so. At ASCE, we overhauled our reminders and cut the average time to review significantly (hint: send pre-reminders to reviewers). Detailed dashboards or reports from the submission and review systems can also help with identifying the stragglers.
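For journals that can export their review data, the straggler check can be automated. Here is a minimal sketch, assuming a CSV export with columns named reviewer_email, due_date, and date_submitted; the column names, file name, and date format are illustrative assumptions, not the fields of any particular submission system.

```python
# Minimal sketch: flag overdue reviews from a submission-system export.
# Assumed CSV columns: reviewer_email, due_date, date_submitted (blank if not yet returned).
# Column names and the date format are illustrative, not from any particular vendor.
import csv
from datetime import date, datetime

def overdue_reviews(path, today=None):
    today = today or date.today()
    stragglers = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("date_submitted"):  # review already returned
                continue
            due = datetime.strptime(row["due_date"], "%Y-%m-%d").date()
            if due < today:
                stragglers.append((row["reviewer_email"], (today - due).days))
    # Most overdue first, so the worst stragglers top the reminder list.
    return sorted(stragglers, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for email, days_late in overdue_reviews("open_reviews.csv"):
        print(f"{email}: {days_late} days overdue")
```

Running this against a weekly export makes it easy to decide who gets a pre-reminder and which reviews need to be reassigned.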

Journals should also streamline the review process and make sure that the instructions are clear on what is expected of reviewers and what they can skip.

Finally, submission systems need to make peer review easier. A reviewer should be able to read and comment on a paper on their tablet, while commuting on a train, or flying to a conference (if they so choose). Reading a PDF and then compiling the feedback line-by-line is unhelpful.

There are steps that could be taken to close the gap in expectations. By widening the reviewer pool and training underutilized reviewers, the burden of review is distributed. At the same time, educating authors on realistic time expectations for a process they value is an important step in encouraging patience.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

10 Thoughts on "Balancing Author Satisfaction with Reviewer Needs"

I am a frequent reviewer and former editor. Most papers that I agree to review are complex but always within my area of expertise. Some take an hour and some take up to four hours, so I could never figure out the need for weeks or months. When I agree to review, which is limited these days to journals where I serve on the editorial board, I am careful to set a calendar reminder one week and one day before the deadline and then clear the deck when I see it, forcing other tasks to wait. Surely I can find 1-4 hours in that week, especially if I refrain from picky wordsmithing. So, if I’m given six weeks, the first five weeks are wasted. I believe many reviewers could act within a month if it were not for their inflated sense of being too busy. With rare exceptions, it’s a needless ego trip to claim being so buried in work. The rest of us know how rarely you are in the office. If you don’t want to review, then tell the truth (“I am not interested”) rather than lie about being too busy. The editors learn how unbusy you really are when they read your cursory reviews.

Oh boy, I am so busy these days I can’t even comment on blogs, let alone review papers.

Most of the time I have a backlog of papers to review, usually about 3-4, so new review requests go into the queue. This explains a lot of the time it takes until a paper is reviewed. Occasionally I do end up with no review backlog, but then I am more likely to accept a new one, so usually there is a backlog. And I reject most review requests…

This is a really useful synthesis, Angela, thank you. One thing I have noticed as a reviewer is that I’m increasingly “asked” to review something by a notice that an account has “been created for you” on XYZ platform. This is the kind of message that goes straight to trash as likely phishing. The second message is then from the editor with the actual ask, sometimes leading me to dig the invite out of the trash. But let me also mention that I’m really disinclined to create an account just to turn down the review invite (I usually just reply by email then, but I suspect many don’t bother). Anyway … just some additional anecdotes to encourage journal editors/publishers to map out the user experience of the messaging on the reviewer side and where bottlenecks are occurring.

Over 28 years of involvement in the peer-review process I have often noted the disconnect between what researchers expect when they are the authors and what they expect when they are the reviewers.

Authors always expect their papers to be viewed as groundbreaking, transformative works that the reviewers should feel honored to have been consulted on and which they should drop everything to read and provide comments on immediately.

Reviewers usually expect to be given more time than allotted when they agreed to provide a review, and think nothing of withdrawing from the review process when told that their review is already two weeks overdue because they are on vacation or holiday, as if this wasn’t something they had planned before they accepted the review request. Don’t even get me started on the whole month-long national holiday issue.

And these are in fact the same group of people far more than 50% of the time (I really don’t understand why we would ask people not actively working and publishing in the field to comment on new research). China may account for a low percentage of reviews submitted compared to the papers produced, but in my experience it also accounts for a very high percentage of those who agree to review a paper and then never do. You can’t blame the editors for not using reviews they never get.

That’s where a really good manuscript tracking system comes in; one that allows me to easily look at the review activity of an author complaining about the amount of time the review process is taking on their latest paper. All too often I can point out that they themselves dragged their feet on providing a review on another author’s paper, even complaining that three weeks was not long enough to read a paper when they had already taken five weeks and hadn’t even looked at the paper yet (yes, that data is also collected!)

If there is a single issue in the area of reviewer training which would be beneficial to the entire enterprise, it would be making it clear that reviewing a paper is a contribution to the scholarly community, not just something to put on one’s CV and claim credit for on Publons.

As an author I’m expecting a fast review, so when I’m reviewing I try to do my best to do it ASAP. Sending it in after a few weeks is disrespectful to others.
Also, I think that every review should be rewarded with a small discount token, as some publishers do.

“Reviewer instructions and invitations should make it explicitly clear that if at any time, the reviewer cannot complete the review, they should inform the editor immediately.”

Thanks for the great tip! I just added wording reflecting this advice to our reviewer reminder emails, emphasizing concern for the author’s position. Most reviewers are courteous and let us know if they will be late or need to withdraw, but it will be interesting to see if adding this text changes the behavior of those who leave us hanging.

This is a terrific article – thank you. I work for a publisher specialising in academic monographs/collections, and peer review is an article of faith for us; but the experience of trying to extract a review from someone who has happily agreed to take it on in the first place can be highly frustrating – as the editor you are too often stuck between a reviewer who won’t/can’t complete the task but won’t relinquish it, and the poor author who needs that review for their book to move forward! If in doubt as to an ability to provide it in any sort of timely fashion, I feel it’s best if reviewers say no in the first place…

I have no doubt, as a former editor and a current author and reviewer, that there is a problem about matching expectations, which is not being well handled at present. I now find journals asking for faster turnarounds, which makes me more likely to decline straight off, which adds to the editor’s problems in finding reviewers. If I am offered less than six weeks, I will decline a request immediately. I make a point of delivering within this time frame, but I will not change my short-term work plan to accommodate an unexpected interruption. In the last few months, I have also faced something of a barrage, which seems to result from editors thinking “Oh, he’s emeritus so he’ll have time to do it!”. Actually, it doesn’t work like that. Apart from some (short-term) personal health issues, the point about being emeritus is precisely being able to spend more time with gardening, grandchildren, bucket list vacation weekends, etc. If you want me to do your review, you have to pitch it as a personal favour rather than as a professional obligation. Given the demographic profile of the academic profession, we need to find ways of encouraging people who are in post to do more rather than assuming that the growing pool of emeriti can meet the need as anything more than a stop-gap.

I believe that the addition of an actual submission checklist vs. multiple clicks and sections online or dozens of PDFs can be the best way to get authors to overcome errors that keep their paper from being further considered or stuck in the pipeline. Also, I think journal editors should conduct a preliminary review of papers regarding protocols so that subject matter experts do not have to troll through poor English or badly presented data, which is cumbersome. First eliminator: do they meet the requested standards? If not, REJECT on that basis; this will help peer reviewers such as myself address content vs. aesthetics. Second, each journal should send a matrix or checklist, not just a list of questions (Is the paper interesting? Are the references solid? etc.). For example, tell me the abstract style and verify that this paper is a research paper, literature review, etc. Then tell me what your guidelines are regarding this type of paper. If I have to spend more time reviewing your basic requirements to ferret out submission guidelines, then I have less time for content. In summary, both the peer reviewer and the author would benefit from one master matrix in which each journal lists the factors it considers relevant and assigns points to them. For example, novel research that is verified with documentation of a systematic literature search, with the key words spelled out in the methodology, could get a bonus 10 points. Some journals actually list what content should be in the introduction. For example, Wiley generally recommends the following for a well-written introduction:
• Sets out the argument
• Summarizes recent research related to the topic
• Highlights gaps in current understanding or conflicts in current knowledge
• Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
• Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Another example: the citation style also dictates the content of certain sections. According to APA, these are the 5 major components of a well-written abstract:
1. Problem/purpose/objective/research question/focus of study,
2. Sample/population size/characteristics,
3. Method (e.g., data-gathering procedures, intervention, research design),
4. Findings, and
5. Conclusions/implications/recommendations.

When I read a document, does the citation style take precedence, or does the publisher's recommendation for these required components of submission? Often I don’t know, and much of my time is consumed on things that should be easily provided. In fact, I am not convinced that publishers are clear on citation style/publishing protocols. Further, a publishing/journal matrix that indicates scope is required and that it is missing would tell me that this paper, say, loses 5 points. If your abstract is missing the methodology, then you lose another 5 points, or whatever numbers you assign for each section. In the end, the assessment threshold should be clear. If you get < 70 points, you receive a rejection for multiple deficiencies. If you get 71-80, resubmit after major revisions; 81-90, resubmit after minor revisions; 91-100 gets an accept with revisions as specifically noted. This makes the decision by the peer reviewer less arbitrary and speaks to the author regarding missing parameters. I believe this approach would standardize the peer-review process and enhance the information provided to authors for corrective action. There is nothing worse than receiving a paper that does not provide some information and direction. After all, peer reviewers should help to promote new research skills. At the same time, authors must learn the process of submitting using journal protocols. Asking an SME to read a paper that does not contain proper English, formatting, etc. is a waste of time, and such papers could easily be eliminated from the process for this reason. Personally, major errors like this distract so much from content that if they are not addressed, otherwise promising papers may be rejected. This is a fact! Let's do something to enhance the process. Thanks for listening.
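To make the scoring bands in the comment above concrete, here is a minimal sketch of the proposed score-to-decision mapping; the cutoffs are the ones the commenter suggests, the function name is purely illustrative, and a score of exactly 70 (which the comment leaves unassigned) is grouped with rejections here.

```python
# Sketch of the score-based decision bands proposed in the comment above.
# Cutoffs come from the comment; the function name is illustrative.
def decision_from_score(score: int) -> str:
    if score >= 91:
        return "accept with revisions as specifically noted"
    if score >= 81:
        return "resubmit after minor revisions"
    if score >= 71:
        return "resubmit after major revisions"
    # The comment leaves exactly 70 unassigned; it is treated as a rejection here.
    return "reject for multiple deficiencies"

print(decision_from_score(84))  # -> resubmit after minor revisions
```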
