Frontispiece to Purgatory by Dante (Photo credit: Wikipedia)

Do papers reporting uninteresting null results or confirmational results need to go through the same peer review process as papers reporting significant and novel results? Or do they require only passing a perfunctory editorial review?

Recently, over lunch, a friend and senior researcher at Cornell told me, quite emphatically, “I never review for PLoS ONE anymore. I spend hours going over the manuscript and writing detailed recommendations for improving the author’s paper, and the editor just discounts them all and publishes the paper as-is.”

Essentially, this researcher felt insulted that an editor would call on him for his expertise, but then ignore what he had to say. In a marketplace that is largely run on free labor, devaluing the work of volunteers is a risky strategy.

When my friend reviews a paper, Cornell is essentially paying for his time, overhead, and benefits. As a consultant, when I accept an invitation to review, all of these costs come out of my own pocket. I pay for the privilege of reviewing a manuscript, which is why I accept so few invitations these days and try to do them only when business is slow.

For me to accept an invitation to review, a paper has to report novel and interesting results. If it has been circulated as a preprint on arXiv, then I don’t benefit from seeing it a second time as a reviewer. Similarly, the paper must also pique my interest in some way. Reviewing a paper that is reporting well-known facts (like documenting the growth of open access journals, for instance) is just plain boring. Test a new hypothesis, apply a new analytical technique to old data, or connect two disparate fields, and you’ll get my attention and my time.

The only other category of manuscripts that I’ll accept for review are those that are so biased or fatally flawed that it would be a disservice to the journal or to the community to allow them to be published. These papers must really have the potential to do harm (by distorting the literature or making a mockery of the journal) for me to review them.

I recently submitted a manuscript to a multidisciplinary journal that credits itself with “fast publication” and “rigorous peer review.” I sent it there because my paper is just an update of an older study (testing the effect of open access on article citations), and the results still show no preferential effects. The journal claims to accept articles based solely on the soundness of their methodology and not on the significance or novelty of their results. Plus, the journal is open access and publication fees are low. If the journal can deliver on all of these promises, it will provide great value to many authors with manuscripts like mine.

But can the journal provide enough value to reviewers who are called upon to vet my work?

If other reviewers are like me (and my senior researcher friend), they will turn down the invitation to review. The manuscript I just submitted offers little incentive to the reviewer; I imagine that only academics with lots of time, a strong sense of duty, or a vehement dislike of my work will accept the responsibility. If reviewers were paid for their time, there might be a market for this kind of review. I just don’t know whether there is much of a voluntary review market for these kinds of manuscripts. And if this article can’t find two willing reviewers, it cannot proceed along the path to publication.

The rise of journals specifically designed to publish scientifically uninteresting results is predicated upon finding competent reviewers, and it is not difficult to imagine that some manuscripts may never find individuals willing to accept that role. While the journal may promise fast publication from the date of acceptance, getting to acceptance may be, by far, the slowest part of publication. In the worst case scenario, a manuscript would languish in a sort of purgatory, waiting for the gates of peer review to open one day and allow the manuscript to finally ascend toward publication, or be returned to the author years later with a feeble apology from the editor.

This makes me wonder whether journals that publish manuscripts reporting null or confirmational results really need to put them through the time, energy, and expense of peer review. Papers that report on significant and novel results have an opportunity to change the direction of science and the standard of medical care, which is why it is so important to vet them so carefully. But uninteresting null or merely confirmational results? [1]

Perhaps all that is needed is to send null and confirmational results through perfunctory editorial review. These articles may only require passing a checklist of required elements before being published in a timely fashion. The result may be a cheaper and faster route to publication, and for some kinds of publications, this is exactly the desired outcome.

[1] Negative results may be interesting if they challenge existing dogma or standard of care.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


15 Thoughts on "Do Uninteresting Papers Really Need Peer Review?"

The problem with not reviewing negative result papers is that they still might have poor methodology etc., and editors might not know enough to pick these out. What might help would be if papers like this didn’t have to have proper introductions and discussions: you could simply say “we are replicating the study of X, which they explained” and “we found the same results,” and only expand on this if you had to. I think this would considerably cut down on the length of a lot of papers, and make people more inclined to review them.

Very interesting article, though I would have a different opinion on some topics. For example, in pharmaceutical research, what counts in the end is meta-studies, so the “small null papers” or the “small well-known facts papers” could become crucial as part of bigger studies. As some articles have shown, taking every clinical trial, as in regulatory files, vs. published trials, could make a huge difference.

Good point, but I am not arguing that null or confirmational results be withheld from publication. Indeed, just the opposite–they should be published, but not required to go through the same peer review process as papers reporting novel and significant results. The paper could state what type of review it passed, and this information could be indexed by a service such as CrossMark to support those doing meta-analyses.

I take it you are talking about a two-track reviewing system, not a journal of uninteresting results. Two-track rule systems are common, but they have their design challenges. The first is how the threshold is defined. The second is who decides, and how?

On the definition issue you note that null results can be very important, as can confirmation. I recall the wild flurry of attempted replication when cold fusion was announced. So neither null nor confirming results are necessarily uninteresting. I thus see no clear criteria for fast tracking but it is an interesting design problem. Who gets on the express lane?

The question of who decides, and how, might range from author self-certification to even more peer review. Ironically, adding this two-track decision step might slow things down, especially if there were disputes. I also wonder whether authors want their work certified as unimportant or uninteresting. Perhaps not.

These are the sorts of complexities that make the design of rule systems so interesting.

To play the Devil’s Advocate–in a post-publication peer review system, one would make the opposite argument. The flashy, paradigm-challenging papers are going to attract lots of attention and scrutiny (think: Arsenic Life, Darwinius fossil). Since those sorts of papers are going to be challenged and either confirmed or proven wrong, do they need pre-publication peer review as much as the incremental papers that aren’t going to draw the same audience, the same level of interest and scrutiny?

You make a good point. At the other end of the spectrum will be papers that are turned down repeatedly by reviewers–often phrased in a response that includes “I’m too busy right now.” The lack of enthusiasm for reviewing a manuscript says something about that manuscript that is not revealed to readers when that manuscript finally makes it through the system. Some publishers will include reviewer comments with the publication. None that I know will report the number of failed attempts to get a review.

As usual, your posts really get people thinking.

The purpose of peer review is both to improve the paper and to help it find the right outlet. My experience has been that communities need to see multiple demonstrations of what an author might believe is a “fact” before the community internalizes and accepts the paradigm shift the initially novel paper represented. If there’s no peer review to validate the subsequent incremental papers, and no peer review to guide these papers (however roughly) to appropriate journals, that seems suboptimal on both levels — the initial finding was peer reviewed, but confirmatory findings are not, and confirmatory or incremental findings are published in off-discipline convenience journals? How would the community have its initial impressions underscored by subsequent studies, if those studies aren’t seen and seem to not have been treated as seriously?

One concern that comes to mind is that signals can be really important and subtle in academia. If an author stops submitting even incremental work to quality journals, that signal might be misinterpreted — does the author no longer believe in that line of research, or have funders found flaws and are pulling back, or were there flaws in the original study that haven’t been discovered and the researcher is just emptying the files before moving on? On the other hand, if an author continues to produce interesting variations on the fundamental premise, passes peer review again and again with such papers, and builds a legacy of findings — well, that usually has a lot of power, both in the literature and for the researcher’s reputation.

But let’s not kid ourselves — not every paper needs to be peer reviewed as rigorously as some, and there is no single thing called “peer review.” Back in 2010, I proposed we start publishing the equivalent of a peer review “methods” section, a short description of the peer review process that preceded publication.

So, to me, the binary nature of your question kind of assumes that peer review is all-or-nothing and has a single mode. In fact, I think it’s already quite variable, as you mention, with “methodological review” a recent form of review for the kinds of studies you’re talking about. However, the problem now is that it’s not clear that studies are being sent away from journals practicing “methodological review” because the editors feel they need more rigorous peer review, so the explicit levels your model suggests don’t yet exist. We just have variability, and novel, audacious papers can be published in journals that set a different threshold for peer review, while incremental papers might go through an over-engineered process given their basic premise.

Ultimately, I think if each journal were more transparent about what goes into its peer review and editorial review processes (as well as the audience it reaches), authors would make more informed submission decisions. That would be perhaps the biggest help of all.

Kent, your response also has a lot to consider. Peer review is a term that loosely describes a highly variable process. Detailing exactly what journals do would be very useful, especially if it can be done at the article level. This would allow a journal to consider different levels of review for different papers. Some journals could insist that all submissions are put through the same process as it conveys a sense of fairness and minimal standards for all papers published by the journal.

While I was initially skeptical of the approach used by F1000 Research (simple rating and short comments), I think their approach is quite novel and is matched with the expectations of both authors and reviewers.

I do think there’s a difference — perhaps an important one — between describing the peer review process an article went through vs. exposing the reviewer notes, à la F1000 Research. Private, pre-publication review allows papers to quietly find their proper outlets. If the only option is to make reviews open as part of the submission process, I think that’s a review process some might avoid because it seems impossible to recover and move on. The submission may be final, whatever the outcome.

I would like to see editors note the review process an article went through, but that doesn’t mean trapping authors in a public exhibition of that process.

Just curious … do you leave open the possibility for an interesting null result? Some, in my view, are rather shocking and deserve to be looked at closely.

“Perhaps all that is needed is to send null and confirmational results through perfunctory editorial review”.

Very dangerous territory… a slippery slope: what about “papers from good groups only need perfunctory editorial review”?

However there is increasing pressure on the system: see some comments I made on this recently at

This is a problem which probably most (all?) PLOS One editors meet at some point.

First, I strongly believe that “boring” results should be peer-reviewed, because of meta-analyses among other things. Also, they might constitute one piece of a future project, and it is important that new scientific projects are built on solid foundations.

Second, my solution is to look for group members of well-established researchers in the field of the paper. I look first at postdocs, but also at students, as long as they have already published on the topic. I invite them as reviewers. I believe we then have a win-win scenario, where the paper gets reviewed, benefiting from the expertise of an established lab, and the junior researchers get experience in reviewing.

Third, when I really find no one, I can act as reviewer myself, and plead with a colleague to help me this once.

But I agree that some papers are slowed down by the difficulty of finding reviewers. This is compounded by the tendency of some authors to suggest completely unrealistic big shots as reviewers for their “boring” papers.
