How quickly can you get a paper through peer review? Editors and boards are under tremendous pressure to decrease the time it takes to get a paper from submission to first decision and then to acceptance. Rapid publication is a major selling tool for any journal.
For the sake of this discussion, I’ll say “traditional” peer review consists of an editor, perhaps an associate editor, and 2-3 content-expert reviewers providing feedback to the authors on whether their paper is suitable for the journal. Revise, resubmit, repeat.
Scholarship being what scholarship is, and academic reviewers being who academic reviewers are, it is rare for a paper to be accepted with no revisions requested.
This process of revision is time-consuming, and several journals are trying new approaches to speed publication while still conducting what is more or less traditional peer review. One example is to focus reviews on whether the “science” is correct. I put “science” in quotes because peer review belongs to both science and humanities publications, and I would argue that there is a “science” to developing a strong thesis in humanities papers as well.
Another tactic is to dispense with requesting major revisions and instead reject the paper while encouraging the authors to fix it and try again. I am going to call this “decline with encouragement to resubmit.”
I have to admit that I have advised editors to lean toward rejection over required revisions for a number of reasons.
First, papers going through revision are a lot more likely to be accepted. For journals I have managed, the proportion of “major revision” papers that are eventually accepted stays solidly between 80% and 90%. When editors, reviewers, and authors have put time into critiquing and improving a paper, it just seems downright unfair to reject it. But there can also be a resignation to accept an okay paper at this point. The editors and reviewers are tired of seeing the paper, and they accept it as passable. For journals looking to publish top-tier content, “passable” is not good enough.
Second, journals do lose a lot of credibility when the process takes forever. Editorial offices encourage editors to tighten the time given to reviewers and sub-editors in order to speed things up a bit. The last resort is to start decreasing the amount of time given to authors to make revisions. Even in circumstances where the revision time is generous, editors need to keep that due date in mind when making a decision.
If the editor feels as though it would be difficult for an author to make the changes required in the amount of time given for revisions, then the editor should reject the paper. Decline with encouragement to resubmit may be appropriate for papers where the topic is interesting but there is too much work required to keep the paper in the review loop.
The dates that are typically published alongside papers — Received, Revised, Accepted, Published — are important. If the gap between received and accepted is too long, others criticize the journal’s turnaround time, even if in that case the editor worked with the authors through multiple rounds of revisions.
If the time from received to accepted is too short, questions may be raised about the paper. This happened recently with a few papers on the toxicity of e-cigarettes. The papers were criticized for having too short a review period. What actually happened was that a paper had been reviewed by Journal A and declined. Through a paper-sharing system, Journal B received the paper, along with the reviews and the author revisions, and accepted it. There was no way to tell from the final publication that the paper had gone through this process prior to being submitted to Journal B.
Third, some papers eventually need to be cut loose. These are the papers that, despite receiving detailed reviews, fail to improve to an acceptable level. Some journal editors have a strict policy on the number of revisions that are allowed. Another problem being reported by journal editors is that authors are ignoring reviewer comments. I think it is safe to say that authors who provide a rebuttal but choose to ignore half of the comments do so at their own peril. A simple sentence or two explaining why a change was not made will typically suffice.
Authors seem to be torn. There are lots of advice blogs for authors that tell them not to sweat the revise and resubmit decision. It means the journal editor likes “something” about your paper. Others see it as an opportunity to dismiss the feedback received and simply submit the unedited paper to another journal. This comes with risks.
In many fields, the reviewer pool is smaller than you would think. It is not uncommon for the same reviewer to get the same paper back again. I hear it all the time. The reviewer reviews a paper for Journal A and it is declined. Detailed feedback was provided. That same reviewer gets the exact same paper from Journal B. Feeling ticked that all of the feedback was ignored, the reviewer either declines the invitation to review and tells the editor why, or accepts the invitation and tells the author that he/she is still recommending the paper be declined.
Where taking a paper elsewhere might work is when it’s clear from the reviews that the paper is not a good fit for the journal. In this case, an author may do well to move on to a more appropriate journal. As a manager of peer review, I can say that the journal office and editor would greatly appreciate notice if you do not intend to send in a revision. Withdrawing the paper from the system is helpful.
Academics in general seem torn over making revisions in peer review. This always strikes me as odd because many of these same authors serve as reviewers and they get equally upset when their reviews are dismissed. Some complain that editors are requiring things that are not important to the paper. Here is my advice on that one:
- Ask your co-authors or colleagues who read your paper prior to submission what they think of the requested changes. Do they agree that the changes are unnecessary?
- Explain in a rebuttal document why you think the request is unnecessary. Most editors appreciate a well laid out argument. Most editors do not appreciate authors ignoring reviewer comments.
- Call the editor! Editors are human beings and they will talk to you. Tell them that the reviewer comments seem a bit off. Don’t be angry and defensive. Ask them to help you navigate the comments. Maybe you weren’t clear about something in your paper. Maybe the reviewer was not an appropriate person to review the paper. The editor may not know that. Before wasting your time yanking the paper, reformatting the paper for another journal, and waiting for a first round of peer review elsewhere, take a few minutes to have a conversation with the editor.
The overall point here is that journal editors are usually practitioners or academics in the same field as the authors. They have been there and they are still publishing papers. They want to know that they are not wasting their time and the time of their reviewers in sending out feedback. They truly believe that it is their job to help authors publish good content. Many are volunteers and given the time commitment required to be an editor, I’d say they feel pretty passionate about this.
The industry is experimenting with other innovations around peer review — portable reviews, open review, preprints with comments prior to submission, etc. Some fields will welcome these with open arms and others will resist. Over time, each community will need to decide what works best for them.
In the meantime, authors should really put as much thought into how to respond to a review as they did in deciding where to submit. It is clear to me that many do, and equally clear that some don’t.
11 Thoughts on "Should You “Revise and Resubmit”?"
Speaking from the perspective of an author, (a) we have already seen through the reject-and-resubmit route as an attempt to make journal statistics look better, and we’re not impressed; and (b) the most important statistic anyway is time to first decision. Personally, I would say that major revision would generally be a better option – so long as the editor handling the ms. has the strength of character to insist that major does mean major, and reject revised versions that haven’t made major changes.
Still, reject-and-resubmit isn’t too bad a way of dealing with papers, I agree, so long as there is some continuity maintained with the resubmission. Here’s how not to do it: recently I had a paper get a reject-and-resubmit decision; I made the best effort I could to revise it, including redoing all the statistical analysis and redrawing the figures, and sent it back, together with a detailed set of responses to all of the earlier criticisms; and quite some time later it came back with a second reject-and-resubmit decision. It had gone to completely new reviewers, who had ignored all of our responses, as (less forgivably) had the editor.
Let’s talk about completely new reviewers for a second. One BIG complaint from reviewers is that if they didn’t like the paper the first time, they don’t want to see it again. This happens with the regular revisions as well. It’s unfortunate and we have added this to our reviewer guidelines but it’s really hard to avoid.
Sometimes the editor will be the reviewer, but many see that as being unfair to the author as well.
For papers declined and then resubmitted, it may be that over the period between submissions, one or all of the original reviewers aren’t available. I suppose the editor could ask the authors whether they prefer to wait until someone is back from sabbatical, but that seems unlikely.
I agree that time to first decision is important but the public stats don’t show this at all.
Oh, I accept new reviewers – but the key point is, not if it means restarting the whole submission process from zero. The editor can still have input, and should be making the final decision anyway. Why bother revising a paper, if the revisions are going to be ignored? Put it another way – from an author’s perspective, if there is no continuity in the process, then there is no advantage in resubmitting to the same journal.
“I agree that time to first decision is important but the public stats don’t show this at all.”
Is this dependent on publisher? Many journals do put this information on their website somewhere – it can serve as a genuine inducement when choosing where to send a ms.
The “decline with encouragement to resubmit” decision can be tricky. After much discussion in an editorial board meeting, we eliminated it from the pick list for the decision templates. The main reason was the feeling among editors that the resubmissions were rarely really re-written in whole, and the result could be a very drawn out process of over a year that wore down the authors, reviewers, and the editor. Often the “decline and encourage” manuscripts were dissertation excerpts from an early career first author, where the science was probably solid, but was poorly written (despite the entire committee being signed on as co-authors). We seemed to be leading authors on in instances where they eventually got re-rejected anyway, or at best, had a just passable paper after a lot of time and work from the reviewers, editor, and editorial office.
Just rejecting the manuscript not only put it out of our misery, but provided the authors with less ambiguous choices. Do they take the criticisms to heart and send a better version to a different journal? Do they invest the energy in arguing the decision as misguided or unfair? Or just shop it “as-built” elsewhere, hoping it doesn’t land with an earlier reviewer from that small reviewer pool?
I agree, Chris. We (ASCE) eliminated it years ago, but the editors wanted it back. I think it’s true that frequently the papers come back with very few changes. I also think the editors use it because they feel like it’s a nicer thing to say than “decline.” I try to tell them that in the long run it’s not nicer, because you have led the authors on in those cases where you really aren’t interested in the paper. I tried to get rid of it again last year, but that didn’t work.
Because we have a lot of journals, we also see papers declined from one and submitted unchanged to another. This is tricky because the editorial board is different, and while the new journal wants to give the paper a fair shake, they also want to know why it was declined in the first place.
Good posting, chock full of great advice! Perhaps the most important is that it is the editor’s job to publish good content. An editor’s most critical task is to decide how to invest the journal’s limited resources, especially that of reviewers’ time.
Once reviews indicate a paper is on the right track but revisions are needed, the journal has an interest in its success: publication is not a sure thing, but the odds are favorable. If the authors have any questions about how to proceed, editors should be happy to help them.
It is interesting to read of the journals that have moved away from Decline and Resubmit. I introduced this when I took over as editor of a sociology journal for exactly the reason identified by other contributors, namely that Revise and Resubmit created too much moral pressure to publish the resulting text. Some of my predecessors had tried R&R with major revisions and R&R with minor revisions, but that did not seem to me to have sufficient clarity as a signal to authors about how much work was needed; Major R&R still seemed to generate the same pressures even if the revisions hadn’t really fixed the problems.

My team created a decision, Reject with invitation to resubmit, which seems to us to work pretty well. We only use R&R for papers that we definitely expect to publish eventually. ‘Reject with’ decisions have always gone to reviewers and come with editorial feedback; a pretty high proportion of them do eventually make the grade, but the authors do seem to get the message.

We have also worked quite hard at filtering out more papers within the editorial team, especially those that are clearly inappropriate for the journal, rather than sending them out to review. We try to do this within a couple of weeks of submission, and authors do seem to appreciate a fast decision to reject with some outline explanation, even if they are disappointed by the outcome. At least they can get on with their lives. The team are also clear that reviews are advisory; it is our responsibility to make decisions, and sometimes this does mean trading off waiting for delinquent reviews against a decision on more partial information.
I’m interested in what you mean by the “science” involved with peer review of articles in the humanities. Are you suggesting that there can be for humanities a peer review like that used for PLoS One that focuses only on methodological soundness? If so, I’d like to hear how “methodological soundness” would be judged in the humanities since there are so many competing methodologies.
Two points regarding peer review in social sciences interest me, but they are tangential to your blog post.
1) Social science needs better data. Mostly more of it.
On the subject of peer review within social science and the humanities: if a paper has no data, no peer review is required, because it has no science. If it has data, the data should be checked, and the authors contacted to make sure they’ve published all their data, including null findings. It’s not good enough to just check that p values are <= 0.05. p <= 0.05 is a borderline finding, indicating that further work is justified before coming to a conclusion. A small study with p <= 0.05 justifies a larger study aiming for, say, p <= 0.01. In most cases, there is no justification for publishing studies with p <= 0.05.
For more on this read Andrew Gelman: http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/
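A rough numeric sketch of why a result that just clears p = 0.05 is borderline (my illustration, not from the comment above, using a standard normal approximation rather than any particular test): if the true effect is exactly as large as the one observed in a study that barely reached p = 0.05 two-sided (z ≈ 1.96), an identical replication will itself reach p < 0.05 only about half the time.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def replication_power(z_obs, z_crit=1.959964):
    """Probability that an identical replication reaches two-sided p < 0.05,
    assuming the true effect equals the observed z-score z_obs
    (normal approximation; ignores the small far-tail contribution)."""
    return 1 - phi(z_crit - z_obs)

# A study that just reaches p = 0.05 has z ~= 1.96: a coin flip to replicate.
print(round(replication_power(1.96), 2))   # ~ 0.5

# A stronger result (z ~= 2.58, i.e. p ~= 0.01) replicates more reliably.
print(round(replication_power(2.58), 2))
```

This is the normal-approximation version of a well-known observation (discussed at length on Gelman’s blog): the replication probability of a just-significant finding is close to 50%, which is why the commenter suggests treating p ≈ 0.05 as a signal to run a larger study rather than as a conclusion.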
2) I'd love to know what changes, if any, are made to a paper before publication. I'd love this to be open. At least then we might know who resubmits to alternative journals when changes to their masterpiece are asked for, who disregards peers' advice, etc.
I say this because I recently blogged about a junk paper, which the journal editor said was peer reviewed. One substantive issue we found with it necessitated a complete change in their data. Apparently, the data (which was open source) had been transcribed almost entirely incorrectly! I wonder, just how did the peers and editor miss that the first time around? Strangely, despite totally new data, the authors say their findings will not substantially change; they will issue a correction rather than a new article. If no change in conclusion follows from entirely new data, then what was the point in publishing any data with the article? Might this even be a case of authors including data to give their bias (AKA 'hypothesis') a sheen of scientific legitimacy it does not deserve?
Working in scientific communications in industry, I have always encouraged both internal and external (academic) authors to address reviewer comments even if the manuscript is rejected. Addressing the reviewers’ comments strengthens the manuscript. In addition, when submitting to a second journal, I have disclosed in my communication that the manuscript was rejected by the previous journal. I have also shared the reviewer comments received from the first journal and explained how we addressed them to strengthen the manuscript. Transparency is the key.