I was scrolling through Twitter with my morning coffee in my favorite place — my screened-in porch — and there were two threads that caught my eye:
The first was about a procrastination method applied to conducting peer review. The author of the tweet described carrying the paper everywhere for a while to “feel its presence,” followed by staring “longingly” at the paper from afar. The process then involves picking up the paper from time to time to read the first sentence. It comes to an inevitable end when the reviewer gets an email from the editor and, voilà, the review is completed and submitted within 3 hours.
What follows this tweet were anecdotes from other academics about their methods of procrastination when it comes to completing peer review.
The second tweet over coffee complained about only getting 3 weeks to review a “long, complex” article. The reviewer estimated that the review would take him two working days and that he would need to reserve a whole weekend for it. It ended with “get real, editors.”
Both of these threads smacked of the disconnect between what authors value (fast time to publication) and what reviewers need (flexibility and time).
There have been plenty of author surveys conducted regarding the value of faster times to publication.
A large global survey done by Editage found that 43% of authors surveyed said that ideally it would take less than 3 months to get a paper published; 69% said less than 6 months. When the same survey asked what was most important to authors in selecting a journal, fast time to publication was the third most important factor, behind journal Impact Factor and the scope of the journal.
Authors were also asked what improvements could be made to the publishing process. Improving the speed of publication ranked first, followed by improving the overall peer review process and the quality of reviews.
A different survey was conducted with authors of megajournals — PLOS One, PeerJ, BMJ Open, and Sage Open. This survey also asked authors what was most important to them in selecting a journal. Journal quality was rated highest, followed by “speed of review and publication process.”
Lastly, the Publons Global State of Peer Review Survey found that 29.1% of respondents were dissatisfied or very dissatisfied with the time to publication with 28.1% neutral and 42.9% satisfied or very satisfied.
These are just a smattering of survey data available, but it illustrates the importance of speed in publication from the author’s perspective.
What about reviewers? Looking back at the second tweet lamenting 3 weeks to review a long paper, it seems speeding the publication process is not a priority for those conducting the reviews. There have been some excellent surveys done of peer reviewers, but few ask how much time is needed to complete a review. Still, here are some numbers:
The Sense about Science survey did ask reviewers to share some data about their reviewing habits. Of those surveyed, 14% said they typically return a review within 1 week, 31% said 2 weeks, 23% said 3 weeks, and 18% said 4 weeks. Note that these are reviewer-reported times, not data based on actual behavior.
Further, 57% said they always turn reviews in on time. Only 11% admitted to never turning their reviews in on time.
This survey also asked reviewers why they decline reviewer invitations. 12% said that the amount of time given to review was too short.
The Publons report collected data from the ScholarOne manuscript submission system to see just how much time is taken to complete a review.
Publons reports that the median time from invitation to the submission of a completed review is 16.4 days (19.1 mean). They estimate that actually reading the paper and completing the review takes on average 5 hours. The Sense about Science survey reported 6 hours.
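A quick back-of-envelope calculation makes the gap vivid. This sketch uses only the figures cited above (Publons: 16.4-day median turnaround, ~5 hours of actual work); it is an illustration, not data from either report:

```python
# Back-of-envelope arithmetic using the figures cited above.
# 16.4 days is the Publons median from invitation to submitted review;
# 5 hours is their estimate of time actually spent on the review.

median_turnaround_days = 16.4   # invitation to submitted review (Publons)
active_work_hours = 5.0         # estimated time spent actually reviewing

# Convert the elapsed turnaround to hours of calendar time
elapsed_hours = median_turnaround_days * 24

# Share of the elapsed window spent actively reviewing
active_share = active_work_hours / elapsed_hours
print(f"Active review time is {active_share:.1%} of the elapsed window")
```

In other words, on these numbers the actual reviewing work fills only about 1% of the window the author spends waiting; the rest is queueing, scheduling, and procrastination of the kind the first tweet described.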
Let’s take a step back and summarize the landscape of data here.
- Millions of hours are spent by scholars peer reviewing the work of other scholars
- Peer review as an activity rates as extremely important to scholars in all surveys
- Most scholars claim that their work has been improved by peer review
- Speed of publication is extremely important to authors
- Finding good reviewers is getting more difficult
I know what you are thinking — aren’t authors and reviewers the same group of people? Yes and no. I don’t have a lot of data to point to, but Springer estimated the overlap at about two-thirds, and journals have anecdotally reported anywhere from 50-70% overlap. Still not too bad.
Why are scholars showing dissatisfaction with the review process when there is so much overlap between the communities?
Part of this disconnect may lie in the ambiguities around the process. Authors often see status updates that say “Under Review,” which can include a whole host of activities. Also, the time from reviewer invitation to decision letter to author is only as quick as the slowest reviewer. If the editor requires three reviews and one reviewer was late or requested an extension, the author must wait for the decision letter.
The worst is when a reviewer never returns a review they agreed to complete. In this instance, the editor doesn’t even know they need to find another reviewer until the review is already late and the process starts from the beginning. This wastes everyone’s time.
Solutions to Consider
So how do we bridge this gap between what authors expect and what reviewers can provide?
The Publons report found that there is an imbalance between who produces papers and who reviews papers. This will not be a surprise to many journal editors: the sharp increase in submissions across the board has come from China, yet journals are not drawing on Chinese reviewers at the same rate. Scholars in the US contribute 32.9% of all reviews but only 25.4% of published article output; conversely, China contributes 8.8% of reviews against 13.8% of article output.
There is interesting data in the report showing that editors predominantly choose to use reviewers from their own region and editors are predominantly from the US and Europe. The output from China will continue to grow and more attention needs to be paid to including Chinese scholars in the peer review process. Author-focused workshops and trainings conducted in China, India, South America, Africa, etc., should include reviewer training as well.
Training of reviewers is always a hot topic. All of the surveys mentioned in this post have an aspect of reviewer training opportunities and overwhelmingly, scholars say they want training. There are several publishers and societies that offer robust training programs online. I would love to know what the uptake looks like. I predict that training is something scholars might say they want on a survey, but actually finding the time to do some online training may be difficult.
It does strike me that part of reviewer training needs to be a frank discussion about saying no to a review request. Life happens and the best of intentions can sometimes go awry. That said, the 260 million researcher days spent waiting for peer review to be completed deserve some attention. Reviewer instructions and invitations should make it explicitly clear that if at any time the reviewer cannot complete the review, they should inform the editor immediately.
Setting expectations for authors is also important. Advertising an average time to first decision that includes desk rejections (papers returned without external review) does not paint an accurate picture for authors.
An audit of the review process should be completed every year or so. At ASCE, we overhauled our reminders and cut the average time to review significantly (hint: send pre-reminders to reviewers). Detailed dashboards or reports from the submission and review systems can also help with identifying the stragglers.
Journals should also streamline the review process and make sure the instructions are clear about what reviewers are expected to do and what they can skip.
Finally, submission systems need to make peer review easier. A reviewer should be able to read and comment on a paper on their tablet, while commuting on a train, or flying to a conference (if they so choose). Reading a PDF and then transcribing the feedback line-by-line into a form is a needlessly cumbersome workflow.
There are steps that could be taken to close this gap in expectations. By widening the reviewer pool and training underutilized reviewers, the burden of review can be distributed more evenly. At the same time, educating authors about realistic timelines for a process they value is an important step in encouraging patience.