Over the last several months, the ASCO editorial team has undertaken a deep evaluation of our flagship journal, the Journal of Clinical Oncology (JCO). With a new editor-in-chief and a new publishing partner, we are exploring all facets of the journal and developing a new 5-year strategic plan.
Part of this process has been to conduct a number of surveys. With the results of our author survey and our reviewer survey in hand, I was reminded of my post in 2019 on Balancing Author Satisfaction with Reviewer Needs, which we revisit below.
In this post, I shared data from large multidisciplinary surveys and highlighted areas in which the needs of both the author and reviewer populations (which don’t overlap as much as we might think) are not being met.
Authors continually report that the turnaround time for peer review is an important factor in selecting a journal, though not the most important, and reviewers are pretty consistent in saying that they aren’t given enough time to complete reviews.
I have also written about reviewer fatigue and highlighted a survey that found that 14% of review declines were due to having too many other papers to review. Those of us who manage peer review know this all too well. Editors have a pool of “go to” experts and tend to use those people more than others.
We also know from survey data in the post below that while English-language journals have seen huge increases in global submissions over the last 10 years, the pool of experts being used to review the literature largely remains US- and Europe-based.
What stood out to me in our JCO survey data is that our reviewer pool is largely the same as the reviewer pools of our competitors. Almost 52% of our reviewer respondents also review for Lancet Oncology, 46% for JAMA Oncology, and 45% for the Journal of the National Cancer Institute.
Oncology is not a particularly small discipline and there are certainly a lot of journals that cover the field. That said, there is little editorial board overlap (when looking at the associate editors typically charged with assigning reviewers) among top journals. Still, regardless of how well an individual journal tries not to “burn out” its reviewer pool, it is impossible for us to know the workload of an individual across multiple journals.
This makes our work of diversifying the reviewer pool even more critical. Expert talent is not an endless resource and we must continue to find ways to bring in new people and balance the work more evenly. Collaborative efforts between journals in the same discipline to share reviews, reviewer training and mentoring programs to bring in new people, and technology tools for reviewer discovery are important initiatives. Further, journals would be wise to examine how their reviewers feel about the experience. Those journals that do better in matching reviewers to papers, provide excellent communication and customer service, and adequately recognize reviewers are likely to win out when two requests hit inboxes at the same time.
Balancing Author Satisfaction with Reviewer Needs
I was scrolling through Twitter with my morning coffee in my favorite place — my screened-in porch — and there were two threads that caught my eye:
The first was about a procrastination method applied to conducting peer review. The author of the tweet talked about carrying the paper everywhere for a while to “feel its presence” followed by staring “longingly” at the paper from afar. The process then involves picking up the paper from time to time to read the first sentence. This comes to an inevitable end when the reviewer gets an email from the editor and voila, the review is completed and submitted within 3 hours.
What followed this tweet were anecdotes from other academics about their methods of procrastination when it comes to completing peer review.
The second tweet over coffee was complaining about only getting three weeks to review a “long, complex” article. The reviewer estimated that doing this review would take him two working days and he would need to reserve a whole weekend for this. It ended with “get real, editors.”
Both of these threads smacked of the disconnect between what authors value (fast time to publication) and what reviewers need (flexibility and time).
There have been plenty of author surveys conducted regarding the value of faster times to publication.
A large global survey done by Editage found that 43% of authors surveyed said that ideally it would take less than 3 months to get a paper published; 69% said less than 6 months. When the same survey asked what was most important to authors in selecting a journal, fast time to publication was the third most important factor, after journal Impact Factor (first) and the scope of the journal (second).
Authors were also asked what improvements could be made to the publishing process. Improving the speed of publication came first, followed by improving the overall peer review process and the quality of reviews.
A different survey was conducted with authors of megajournals — PLOS One, PeerJ, BMJ Open, and Sage Open. This survey also asked authors what was most important to them in selecting a journal. Journal quality was rated highest, followed by “speed of review and publication process.”
Lastly, the Publons Global State of Peer Review Survey found that 29.1% of respondents were dissatisfied or very dissatisfied with time to publication, 28.1% were neutral, and 42.9% were satisfied or very satisfied.
These are just a smattering of the survey data available, but they illustrate the importance of publication speed from the author’s perspective.
What about reviewers? Looking back at the second tweet lamenting the inadequacy of three weeks to review a long paper, it seems that speeding up the publication process is not a priority for those conducting the reviews. There have been some excellent surveys done of peer reviewers, but few ask questions about the amount of time needed to complete a review. Still, here are some numbers:
The Sense about Science survey did ask reviewers to share some data about their reviewing habits. Of those surveyed, 14% said they typically return a review within 1 week, 31% said 2 weeks, 23% said 3 weeks, and 18% said 4 weeks. Note that these are reviewer-reported times, not data based on actual behavior.
Further, 57% said they always turn reviews in on time. Only 11% admit to never turning their reviews in on time.
This survey also asked reviewers why they decline reviewer invitations. 12% said that the amount of time given to review was too short.
The Publons report collected data from the ScholarOne manuscript submission system to see just how much time is taken to complete a review.
Publons reports that the median time from invitation to the submission of a completed review is 16.4 days (19.1 mean). They estimate that actually reading the paper and completing the review takes on average 5 hours. The Sense about Science survey reported 6 hours.
Let’s take a step back and summarize the landscape of data here.
- Millions of hours are spent by scholars peer reviewing the work of other scholars
- Peer review as an activity rates as extremely important to scholars in all surveys
- Most scholars claim that their work has been improved by peer review
- Speed of publication is extremely important to authors
- Finding good reviewers is getting more difficult
I know what you are thinking — aren’t authors and reviewers the same group of people? Yes and no. I don’t have a lot of data to point to but Springer Nature estimated that the overlap was about two-thirds. Anecdotally, journals have been reporting anywhere from 50-70% overlap. Still not too bad.
Why are scholars showing dissatisfaction with the review process when there is so much overlap between the communities?
Part of this disconnect may lie in the ambiguities around the process. Authors often see status updates that say “Under Review,” which can include a whole host of activities. Also, the time from reviewer invitation to decision letter to author is only as quick as the slowest reviewer. If the editor requires three reviews and one reviewer was late or requested an extension, the author must wait for the decision letter.
The worst is when a reviewer never returns a review they agreed to complete. In this instance, the editor doesn’t even know they need to find another reviewer until the review is already late and the process starts from the beginning. This wastes everyone’s time.
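The dynamic above reduces to a simple calculation: the editor cannot send a decision until the last required review arrives, so the round is only as fast as its slowest reviewer. A minimal sketch, using hypothetical reviewer names and turnaround numbers:

```python
# A round of peer review completes only when the slowest reviewer finishes.
# Hypothetical turnaround times, in days from invitation to submitted
# review, for three reviewers assigned to the same paper.
turnarounds = {"reviewer_a": 12, "reviewer_b": 18, "reviewer_c": 31}

# The author's wait for a decision letter is driven by the maximum,
# not the average: two prompt reviews still sit idle until the third lands.
days_to_decision = max(turnarounds.values())
print(days_to_decision)  # 31
```

If a reviewer ghosts entirely, the editor must invite a replacement and this clock effectively restarts, which is exactly the worst case described above.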
Solutions to Consider
So how do we bridge this gap between what authors expect and what reviewers can provide?
The Publons report found that there is an imbalance between who produces papers and who reviews papers. This will not be a surprise to many journal editors. The sharp increase in submissions across the board has come from China, yet journals are not drawing on reviewers from China at a commensurate rate. Scholars in the US contribute 32.9% of all reviews compared to 25.4% of published article output. Conversely, China contributes 8.8% of the reviews with an article output of 13.8%.
There is interesting data in the report showing that editors predominantly choose to use reviewers from their own region and editors are predominantly from the US and Europe. The output from China will continue to grow and more attention needs to be paid to including Chinese scholars in the peer review process. Author-focused workshops and trainings conducted in China, India, South America, Africa, etc., should include reviewer training as well.
Training of reviewers is always a hot topic. All of the surveys mentioned in this post asked about reviewer training opportunities, and overwhelmingly, scholars say they want training. There are several publishers and societies that offer robust training programs online. I would love to know what the uptake looks like. I predict that training is something scholars might say they want on a survey, but actually finding the time to do some online training may be difficult.
It does strike me that part of reviewer training needs to be a frank discussion about saying “no” to a review request. Life happens and the best of intentions can sometimes go awry. That said, the 260 million researcher days spent waiting for peer review to be completed deserve some attention. Reviewer instructions and invitations should make it explicitly clear that if, at any time, the reviewer cannot complete the review, they should inform the editor immediately.
Setting expectations for authors is also important. Advertising the average time to first decision without excluding desk-rejected papers (those rejected without external review) does not paint an accurate picture for authors.
An audit of the review process should be completed every year or so. At ASCE, we overhauled our reminders and cut the average time to review significantly (hint: send pre-reminders to reviewers). Detailed dashboards or reports from the submission and review systems can also help with identifying the stragglers.
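The straggler report suggested above can be as simple as flagging reviewers whose typical turnaround exceeds the journal’s review window. A minimal sketch, assuming a hypothetical export of per-reviewer turnaround times and a three-week window:

```python
from statistics import median

# Hypothetical export from a submission system: reviewer name mapped to
# the turnaround times (days from invitation to submitted review) of
# their recent completed reviews.
review_times = {
    "reviewer_a": [10, 14, 12],
    "reviewer_b": [30, 41, 35],
    "reviewer_c": [16, 19],
}

# Flag reviewers whose median turnaround exceeds the review window —
# here the three-week (21-day) deadline many journals use.
WINDOW_DAYS = 21
stragglers = {
    name: median(times)
    for name, times in review_times.items()
    if median(times) > WINDOW_DAYS
}
print(stragglers)  # {'reviewer_b': 35}
```

The median is used rather than the mean so that one extension-delayed review does not mislabel an otherwise prompt reviewer.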
Journals should also streamline the review process and make sure the instructions are clear on what is expected of reviewers and what they can skip.
Finally, submission systems need to make peer review easier. A reviewer should be able to read and comment on a paper on their tablet, while commuting on a train, or flying to a conference (if they so choose). Reading a PDF and then compiling the feedback line by line in a separate form is cumbersome.
There are steps that could be taken to close the gap of expectations. By widening the reviewer pool and training underutilized resources, the burden of review is distributed. At the same time, educating authors on the real-time expectations for a process they value is an important step in encouraging patience.