Over the last several months, the ASCO editorial team has embarked on a deep dive of evaluating our flagship journal, the Journal of Clinical Oncology (JCO). With a new editor-in-chief and a new publishing partner, we are exploring all facets of the journal and developing a new 5-year strategic plan.

Part of this process has been to conduct a number of surveys. With the results of our author survey and our reviewer survey in hand, I was reminded of my post in 2019 on Balancing Author Satisfaction with Reviewer Needs, which we revisit below.

In this post, I shared data from large multidisciplinary surveys and highlighted areas in which the needs of both the author and reviewer populations (which don’t overlap as much as we might think) are not being met.

Authors continually report that the turnaround time for peer review is an important factor in selecting a journal, though not the most important, and reviewers are pretty consistent in saying that they aren’t given enough time to complete reviews.

I have also written about reviewer fatigue and highlighted a survey that found that 14% of review declines were due to having too many other papers to review. Those of us who manage peer review know this too well. Editors have a pool of “go to” experts and tend to use those people more than others.

We also know from survey data in the post below that while English-language journals have seen huge increases in global submissions over the last 10 years, the pool of experts asked to review the literature remains largely US- and Europe-based.

What stood out to me in our JCO survey data is that our reviewer pool is largely the same as the reviewer pools of our competitors: almost 52% of our reviewer respondents also review for Lancet Oncology, 46% for JAMA Oncology, and 45% for the Journal of the National Cancer Institute.

Oncology is not a particularly small discipline and there are certainly a lot of journals that cover the field. That said, there is little editorial board overlap among top journals (looking at the associate editors typically charged with assigning reviewers). Still, regardless of how well an individual journal tries not to “burn out” its reviewer pool, it is impossible for us to know the workload of an individual across multiple journals.

This makes our work of diversifying the reviewer pool even more critical. Expert talent is not an endless resource, and we must continue to find ways to bring in new people and balance the work more evenly. Collaborative efforts between journals in the same discipline to share reviews, reviewer training and mentoring programs to bring in new people, and technology tools for reviewer discovery are all important initiatives. Further, journals would be wise to examine how their reviewers feel about the experience. Journals that do a better job of matching reviewers to papers, provide excellent communication and customer service, and adequately recognize reviewers are likely to win out when two requests hit inboxes at the same time.

Balancing Author Satisfaction with Reviewer Needs

I was scrolling through Twitter with my morning coffee in my favorite place — my screened-in porch — and there were two threads that caught my eye:

The first was about a procrastination method applied to conducting peer review. The author of the tweet talked about carrying the paper everywhere for a while to “feel its presence” followed by staring “longingly” at the paper from afar. The process then involves picking up the paper from time to time to read the first sentence. This comes to an inevitable end when the reviewer gets an email from the editor and voila, the review is completed and submitted within 3 hours.


What followed this tweet were anecdotes from other academics about their methods of procrastination when it comes to completing peer review.

The second tweet over coffee was a complaint about only getting three weeks to review a “long, complex” article. The reviewer estimated that the review would take him two working days and that he would need to reserve a whole weekend for it. It ended with “get real, editors.”

Both of these threads smacked of the disconnect between what authors value (fast time to publication) and what reviewers need (flexibility and time).

Author Expectations

There have been plenty of author surveys conducted regarding the value of faster times to publication.

A large global survey done by Editage found that 43% of authors surveyed said that ideally it would take less than 3 months to get a paper published; 69% said less than 6 months. When the same survey asked what was most important to authors in selecting a journal, fast time to publication was the third most important factor, behind journal Impact Factor and the scope of the journal.

Authors were also asked what improvements could be made to the publishing process. Improving the speed of publication came first, followed by improving the overall peer review process and the quality of reviews.

A different survey was conducted with authors of megajournals — PLOS One, PeerJ, BMJ Open, and Sage Open. This survey also asked authors what was most important to them in selecting a journal. Journal quality was rated highest, followed by “speed of review and publication process.”

The 2011 Peer Review Survey done by Sense about Science found that 42% of respondents found the peer review process to be slow or very slow.

Lastly, the Publons Global State of Peer Review Survey found that 29.1% of respondents were dissatisfied or very dissatisfied with the time to publication, with 28.1% neutral and 42.9% satisfied or very satisfied.

This is just a smattering of the available survey data, but it illustrates the importance of publication speed from the author’s perspective.

Reviewer Realities

What about reviewers? Looking back at the second tweet lamenting the inadequacy of three weeks to review a long paper, it seems that speeding up the publication process is not a priority for those conducting the reviews. There have been some excellent surveys done of peer reviewers, but few ask questions about the amount of time needed to complete a review. Still, here are some numbers:

The Sense about Science survey did ask reviewers to share some data about their reviewing habits. Of those surveyed, 14% said they typically return a review within 1 week, 31% said 2 weeks, 23% said 3 weeks, and 18% said 4 weeks. Note that these are reviewer-reported times, not data based on actual behavior.

Further, 57% said they always turn reviews in on time. Only 11% admit to never turning their reviews in on time.

This survey also asked reviewers why they decline review invitations; 12% said that the amount of time given to review was too short.

The Publons report collected data from the ScholarOne manuscript submission system to see just how much time is taken to complete a review.

Publons reports that the median time from invitation to the submission of a completed review is 16.4 days (19.1 mean). They estimate that actually reading the paper and completing the review takes on average 5 hours. The Sense about Science survey reported 6 hours.

Let’s take a step back and summarize the landscape of data here.

  1. Millions of hours are spent by scholars peer reviewing the work of other scholars
  2. Peer review as an activity rates as extremely important to scholars in all surveys
  3. Most scholars claim that their work has been improved by peer review
  4. Speed of publication is extremely important to authors
  5. Finding good reviewers is getting more difficult

I know what you are thinking: aren’t authors and reviewers the same group of people? Yes and no. I don’t have a lot of data to point to, but Springer Nature estimated that the overlap was about two-thirds. Anecdotally, journals have been reporting anywhere from 50-70% overlap. Still not too bad.

Why are scholars showing dissatisfaction with the review process when there is so much overlap between the communities?

Part of this disconnect may lie in the ambiguities around the process. Authors often see status updates that say “Under Review,” which can cover a whole host of activities. Also, the time from reviewer invitation to decision letter is only as quick as the slowest reviewer: if the editor requires three reviews and one reviewer is late or requests an extension, the author must wait for the decision letter.
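To make the math of that bottleneck concrete, here is a minimal sketch in Python with made-up numbers; the point is simply that the author-facing wait is the maximum of the individual review times, not the average:

```python
# Hypothetical turnaround times (in days) for the three reviews
# an editor is waiting on before drafting a decision letter.
review_times = [12, 16, 41]  # one straggler

average_review = sum(review_times) / len(review_times)
author_wait = max(review_times)  # the decision cannot go out before the last review arrives

print(f"Average review time: {average_review:.1f} days")  # 23.0
print(f"Author actually waits: {author_wait} days")       # 41
```

One slow reviewer nearly doubles the wait even though the other two were prompt, which is exactly the experience authors complain about.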

The worst is when a reviewer never returns a review they agreed to complete. In this instance, the editor doesn’t even know they need to find another reviewer until the review is already late and the process starts from the beginning. This wastes everyone’s time.

Solutions to Consider

So how do we bridge this gap between what authors expect and what reviewers can provide?

The Publons report found that there is an imbalance between who produces papers and who reviews papers. This will not be a surprise to many journal editors. The sharp increase in submissions across the board has come from China; however, journals are not using reviewers from China at anywhere near the same rate. Scholars in the US contribute 32.9% of all reviews compared to 25.4% of published article output. Conversely, China contributes 8.8% of the reviews with an article output of 13.8%.

There is interesting data in the report showing that editors predominantly choose to use reviewers from their own region and editors are predominantly from the US and Europe. The output from China will continue to grow and more attention needs to be paid to including Chinese scholars in the peer review process. Author-focused workshops and trainings conducted in China, India, South America, Africa, etc., should include reviewer training as well.

Training of reviewers is always a hot topic. All of the surveys mentioned in this post touch on reviewer training opportunities, and overwhelmingly, scholars say they want training. There are several publishers and societies that offer robust training programs online. I would love to know what the uptake looks like. I predict that training is something scholars will say they want on a survey, but actually finding the time to do online training may be difficult.

It does strike me that part of reviewer training needs to be a frank discussion about saying “no” to a review request. Life happens and the best of intentions can sometimes go awry. That said, the 260 million researcher days spent waiting for peer review to be completed deserve some attention. Reviewer instructions and invitations should make it explicitly clear that if, at any time, the reviewer cannot complete the review, they should inform the editor immediately.

Setting expectations for authors is also important. Advertising an average time to first decision that includes desk-rejected papers (those rejected without external review) makes the review process look faster than it is and does not paint an accurate picture for authors.

An audit of the review process should be completed every year or so. At ASCE, we overhauled our reminders and cut the average time to review significantly (hint: send pre-reminders to reviewers). Detailed dashboards or reports from the submission and review systems can also help with identifying the stragglers.
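As a sketch of what those reminders and dashboards might look like in practice, here is a short Python example; the three-day pre-reminder window and the record format are my own assumptions for illustration, not any particular system’s behavior:

```python
from datetime import date, timedelta

# Hypothetical open review assignments: (reviewer, due date)
assignments = [
    ("Reviewer A", date(2019, 7, 1)),
    ("Reviewer B", date(2019, 7, 10)),
    ("Reviewer C", date(2019, 6, 20)),
]

today = date(2019, 7, 8)
PRE_REMINDER_WINDOW = timedelta(days=3)  # nudge shortly before the deadline, not after

for reviewer, due in assignments:
    if due < today:
        print(f"{reviewer}: overdue by {(today - due).days} days -- flag for the editor")
    elif due - today <= PRE_REMINDER_WINDOW:
        print(f"{reviewer}: due {due} -- send a pre-reminder now")
    else:
        print(f"{reviewer}: due {due} -- no action yet")
```

The pre-reminder is the key move: it reaches the reviewer while the deadline can still be met, rather than after the review is already late.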

Journals should also streamline the review process and make sure the instructions are clear about what is expected of reviewers and what they can skip.

Finally, submission systems need to make peer review easier. A reviewer should be able to read and comment on a paper on their tablet, while commuting on a train, or while flying to a conference (if they so choose). Reading a PDF and then compiling the feedback line by line in a separate document is cumbersome.

There are steps that can be taken to close this gap in expectations. Widening the reviewer pool and training underutilized communities of scholars distributes the burden of review more evenly. At the same time, educating authors on realistic timelines for a process they value is an important step in encouraging patience.

Angela Cochran

Angela Cochran is Vice President of Publishing at the American Society of Clinical Oncology. She is past president of the Society for Scholarly Publishing and of the Council of Science Editors. Views on TSK are her own.

Discussion

5 Thoughts on "Revisiting: Balancing Author Satisfaction with Reviewer Needs"

Hmmm! “Expert talent is not an endless resource.” “Finding good reviewers is getting more difficult.” Well, reviewers who are both “expert” and “good” took some time to get where they are. The many demands on their time include reading journals in order to maintain their expertise.

Those who are not narrowly based have to scan the TOCs of a wide number of journals daily. One Publisher, who shall be nameless here, teases its readers by providing incomplete TOCs. The reader is invited to visit the publisher’s webpage to see the rest. This limits the number of TOCs that can be scanned in the time available. [There are a few other publishers who engage in milder teasing.]

I personally do not accept that Publisher’s invitation to read the full TOC. That means I sometimes do not read as much as half of that TOC’s contents. Two points, assuming there are perhaps others like me:

1. Authors who want their work read are advised not to submit to that Publisher.
2. Potential reviewers should push back at that Publisher and refuse invitations to review papers submitted to its journals.

It seems like just yesterday that, in 1970, it took at least 15 months to receive an acceptance letter and then 2-3 years to see the article in print. Mail was slow, and of course there was a page limit for the journal and the queue was long!

“Why are scholars showing dissatisfaction with the review process when there is so much overlap between the communities?” It doesn’t really matter that authors and reviewers are “the same group of people.” Authors and reviewers occupy completely different roles in the scholarly communication ecology, and because of those different roles, have different beliefs, values, and expectations. Authors are often annoyed when it takes 6-8 weeks to get one of *their* manuscripts reviewed. They think it should take half that amount of time. (And the result should be unambiguously positive!) But ask that very same person to review *someone else’s* manuscript in 6-8 weeks, and you’ll get a very different response. “Why, who could *possibly* review a manuscript in such a short period of time?! (Especially one so flawed and unworthy of publication!)” I don’t believe any system can be constructed to overcome this absurdity. People will always be people, no matter how hard you try to make them rational.

Great topic. Thanks for covering it. For many years, as editor-in-chief of a general-interest surgical journal, I found myself on the road giving “reviewer trainings.” I never got a strong sense that the PowerPoint lecture is a very “sticky” educational intervention.

So we created a freely available online tool. We purposely did not copyright any portion of it, in the hopes that other journals would borrow, emulate, or even suggest their reviewers use it (there is nothing proprietary about it, and the output of this online “app” can be copy/pasted into any journal’s reviewer system).

It’s based on an algorithm that brings up specific methods questions for different article types (e.g., articles on treatments, diagnostic tests, exposure/risk/harm, natural history, and meta-analyses each have different methods elements that are important), as well as a few items that are important for all article types. It ensures that reviewers give some thought to issues of positive-outcome bias in no-difference studies, which has been a problem in the past.

The idea here is that by walking reviewers through the job in the context of the article they’re reading, it might start to get them thinking about the key methods questions to ask of any article they might read.
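For readers curious what such an algorithm looks like in outline, here is a minimal sketch; the article types and questions below are invented for illustration and are not the actual content of the CORR tool:

```python
# Illustrative mapping from article type to methods questions.
# Both the types and the questions are made-up examples.
COMMON_QUESTIONS = [
    "Is the study question clearly stated?",
    "In a no-difference study, was the sample large enough to detect a difference?",
]

TYPE_SPECIFIC_QUESTIONS = {
    "treatment": ["Were the treatment groups comparable at baseline?"],
    "diagnostic test": ["Was there an independent reference standard?"],
    "meta-analysis": ["Was heterogeneity across studies assessed?"],
}

def reviewer_checklist(article_type: str) -> list:
    """Return the methods questions a reviewer should answer for this article type."""
    return COMMON_QUESTIONS + TYPE_SPECIFIC_QUESTIONS.get(article_type, [])

for question in reviewer_checklist("treatment"):
    print("-", question)
```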

We’ve had good experiences with it; reviewers who use it seem to like it, especially those new to the job.

By taking a systems-based approach (rather than one-off “reviewer trainings” that don’t seem to stick), we’re seeing some improvements. I think reviewers appreciate having a tool that helps them to focus on the key elements of the job; they’re volunteering their time, so whatever we can do to make it easier and more efficient seems worthwhile.

Give it a try if you like: it’s on the right-hand side of our homepage, http://www.clinorthop.org, under Links to Author/Reviewer Tools; click Peer Reviewer Tool. Below that are instructions. If you want to hit it directly, the URL is https://journals.lww.com/clinorthop/Secure/Pages/reviewertool.aspx.

We are not making any money at this. Feel free to borrow or adapt the idea for your own journal if you wish to, or point your reviewers to the tool (it generates a Microsoft Word document at the end, which they can upload or copy/paste to your reviewer interface). The principles apply to all medical and surgical research, but will probably not apply to other disciplines (such as humanities or social sciences).

Seth S. Leopold
Editor-in-Chief
Clinical Orthopaedics and Related Research

Thank you for sharing that resource. A lot of reviewer training happens unofficially, meaning that a senior person is asked to review, and has students and post-docs review the paper in tandem. Some journals have encouraged this behavior and asked that those student reviews be returned to the journal for “credit” to the reviewer. Other programs like editorial fellowships and reviewer mentoring programs are also effective, but can only reach a small number of people at a time.
