Last week, Slate published an ill-advised hatchet job by an education columnist specializing in the humanities. The topic? Peer review. The particulars of the article inspired me to add a sharply critical comment, and I was only one among dozens who found fault with it, but there is a more fundamental question worth considering in all this:

What is “peer review”?

[Image: “she blinded me with science,” via Wikipedia]

Peer review certainly isn’t one thing. Arguing as if it were is a fundamental error, one made by the Slate columnist and often by others. Peer review is constantly evolving, and it is difficult to define precisely because of all the variations it can take. All this makes general approbation or condemnation of peer review difficult to take without the requisite grain of salt.

We often make an “availability error,” generalizing to all forms of peer review based on the limited information we have about the kinds we’ve experienced. Combine this with how peer review seems captive to history — as if it were invented once and then taken forward like a precious gemstone — and it’s easy to see why there’s confusion about how variable peer review can be, how it’s evolved, and where it’s going.

Here are some of the variables in peer review we have to understand before we know what kind of peer review we’re actually talking about:

  • Is it blinded?
  • If it is blinded, is it single-blinded or double-blinded?
  • Is there statistical or methodological review in addition to external peer review?
  • Are the peer reviewers truly experts in the field or a more general assemblage of individuals?
  • What are the promises and goals of the peer review process?
  • What type of disclosure of financial or other potential competing interests is made? Are reviewers aware of these?
  • Is there a senior editor of some sort involved along with outside peer reviewers?
  • Is the peer review “inherited” from another body, such as a committee or a preceding journal process (e.g., in “cascading” title situations or when expert panels have been involved)?
  • Are there two tiers of peer review within the same journal’s practices?
  • Is the peer review done at the article level or at the corpus level (as happens with some supplements)?
  • Is plagiarism-detection software used as part of the process?
  • Are figures checked for manipulation?
  • Is the peer reviewer graded by a senior editor as part of an internal evaluation and improvement process?

Each one of these points is worthy of discussion, and the myriad combinations of these variables constitute peer review in the large sense. However, using any particular combination as a single encapsulation or representation of peer review is akin to using a single automobile model to stand in for all cars, past and future. You may have a Pinto on your hands, you may have a Tesla, or you may have a Camry. Each one shares some common features, but what matters are the differences.
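To make the combinatorial point concrete, here is a minimal sketch in Python that models just four of the variables above as a data structure (the names are hypothetical, not any standard taxonomy). Even this small subset yields two dozen distinct processes, each of which could legitimately be called “peer review”:

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

class Blinding(Enum):
    OPEN = "open"
    SINGLE = "single-blind"
    DOUBLE = "double-blind"

@dataclass(frozen=True)
class PeerReviewProcess:
    # Four of the variables listed above; real processes have many more.
    blinding: Blinding
    statistical_review: bool
    plagiarism_check: bool
    figure_forensics: bool

# Enumerate every combination of just these four variables.
variants = [
    PeerReviewProcess(b, s, p, f)
    for b, s, p, f in product(Blinding, (True, False), (True, False), (True, False))
]
print(len(variants))  # 3 * 2 * 2 * 2 = 24 distinct "peer review" processes
```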

What is the ideal peer review process? It depends on the goals and intent of the journal and its editors. This is another place where generalizations can get in the way. Medical journals generally have very high standards for peer review, while journals that don’t deal with health decisions or the life sciences can have different standards and requirements without doing readers any disservice. Computing journals, humanities journals, and others develop their own standards and should take pride in what they do if their readers value and trust the output. There is no single formula. There is, as the pediatricians say, “a wide range of normal.”

So-called “post-publication peer review” is an interesting animal in its own right. As I’ve written before, there are many problems with equating commenting with peer review, including a lack of pre-qualification and no central evaluative authority or accountability. And it’s nothing new, in a sense. The most basic form of post-publication peer review is called “reading.” A more involved version is called “a discussion” or “a journal club.” The most elaborate version is called “science.” Those who want to capture it in computer code are making essential trade-offs, many of which we don’t yet understand completely. One of these is the cost of entry into public post-publication peer-review systems: the effort it entails, the risk of being publicly shamed for saying something ill-advised or clearly wrong, and the lack of actual give-and-take. Again, post-publication peer review is not one thing, and just because you capture some aspect of it in commenting software doesn’t mean you’ve done much of anything.

Peer review is a category of activities, not a singular activity. In track and field, “the running events” are likewise a category, not a single event. The sprint events differ dramatically from the distance events, and the team relays blend elements of both. Even within the sprints there are important differences between specific types (outright sprints vs. hurdles), just as there are between middle-distance and long-distance events.

Peer review, as a category, is also changing, and quite rapidly. In the past decade, we’ve added image-manipulation tests, plagiarism-detection software, new disclosure rules, and a growing roster of reviewers from around the world, thanks to online submission systems and email. Now publishers can consider signing on with transparency services and preliminary review services. Peer review is evolving, sometimes rapidly, sometimes slowly, but progress is being made.
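On one of those additions: plagiarism-detection tools differ, but many rest on some form of text-overlap scoring. Below is a toy sketch of that general idea, comparing shingled word n-grams with Jaccard similarity. The tokenization, shingle size, and what score merits an editor’s attention are illustrative assumptions, not any vendor’s actual method.

```python
def shingles(text: str, n: int = 3) -> set:
    # Break text into overlapping word n-grams ("shingles").
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a: str, doc_b: str, n: int = 3) -> float:
    # Jaccard similarity of the two shingle sets:
    # 0.0 = no shared n-grams, 1.0 = identical shingle sets.
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

a = "the quick brown fox jumps over the lazy dog near the riverbank"
b = "a quick brown fox jumps over the lazy dog near the river"
print(round(overlap_score(a, b), 2))  # 0.67 -- heavy overlap, worth an editor's look
```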

With this degree of variability and the constant evolution it’s undergoing, peer review is difficult to praise or criticize uniformly. It is more appropriate to critique certain aspects of it.

However, in critiquing it, we also encounter our own expectations of what it can provide. Some people believe peer review should find all possible errors or misinterpretations in or around a study. Others believe peer review can stop marginal studies from being published. What peer review actually does is limited, important, and useful; there is nothing contradictory among those three qualities. Even the most rigorous peer review at a journal cannot stop a study from being published somewhere. Peer reviewers can’t stop an author from self-promoting a published work later. Peer reviewers do not check all the datasets, rerun calculations of p-values, and so forth, except when statistical reviewers are involved, and even then statistical reviewers often check the methodologies used, sample some data, and move on.

Failures we attribute to peer review often belong to authors. If someone can name a so-called “peer review scandal” that doesn’t boil down to authors making outlandish claims, fabricating data, or misinterpreting their own data, please point it out. Blaming a process may help us avoid confrontation or accusation, but it can also mislead us into trying to fix something that requires nothing more than sensible updates and improvements, while allowing authors to abdicate responsibility.

Does peer review work? Is peer review broken? The vast majority of authors believe it improves their final work, and since it’s evolving from this solid base, it’s clearly not broken. But before we can have a useful discussion about its purpose and effectiveness, we need to agree on which approach to peer review we’re talking about, and then ask whether our expectations of it are reasonable and accurate.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

12 Thoughts on “Your Question for the Day — What Is ‘Peer Review’?”

Often “peer review” as a term and concept is used too loosely, particularly when criticizing or defending the current situation(s).

On one hand, a narrower definition of “peer review” would be good, so that office practices and editorial procedures before, during, and after a manuscript is under the evaluation of “peers” could be discussed separately. If it takes a month for a journal’s submission procedure to get the manuscript in front of reviewers, or if the deciding editor is a careless, biased laggard, those are not faults of “peer review” or reasons to call for its demolition.

On the other hand, editorial decision-making is an inseparable part of “peer review”, in the sense that having your manuscript peer reviewed does not mean your work is now labeled “peer reviewed”. No. The label requires that some entity recognized by the community as having authority has “accepted” the work, ostensibly based on the editor’s consideration of the statements given by peers (and hopefully only those, not on the author’s or reviewer’s perceived prestige in the field, or the sensationalism of the claim, or the grandeur of the research institute, or personal trust between editor and author/reviewer, or… the list goes on).

In addition to “peer review” not being one thing, much of the criticism should properly be addressed instead to editor competence, diligence, and impartiality, and to procedures and policies inside editorial offices that, strictly speaking, should not be put under the umbrella term “peer review”.

Excellent post. The purpose of the specific instance of peer review is indeed an integral factor in the process. I suppose most peer reviewers are not in the least interested in catching lies and falsehoods that stem from ill intent, but are for the most part nudged by a passion for the ideas and experiences that advance their respective fields and expand their awareness.

The most interesting cases for some will be those dealing with the disruption of existing paradigms, where new connections are being made. In some cases of this sort there may be very few peers, and the question may be about likelihood: being less wrong, more promising.

By T. S. Eliot’s definition of “what is worth criticizing,” many objects undergoing peer review simply don’t make the cut, or they are very tiered.

Clearly Rebecca Schuman has an ax to grind. I’ve had every one of her complaints happen to me over the years, but overall most reviews have been pretty well done. It really comes down to the editor to manage the process and make it timely and as fair and constructive as possible.

A journal is shooting itself in the foot if it abuses authors. Authors have a lot more choices today about where to publish, and the quality of the service a journal provides, including fair and timely peer review, is a big factor in where authors publish.

Should we not add to this list things that relate to the motivation(s) of the reviewer, such as compensation or the lack of it? Where compensation is non-pecuniary, how does that compensation affect the diligence and objectivity of the reviewer? Other considerations might include whether a reviewer is acting alone or in concert with other reviewers.

Mr. Lowney: I have been doing this for some 40 years, and in STEM publishing I have never paid a reviewer. Do you think one can buy a good review of a bad submission, and why would a publisher want to do that?

Kent writes only about peer review as it works for journals, but of course there is peer review for monographs as well, and that kind of peer review differs in very significant ways from peer review of articles. The role of the publisher’s staff editor is much greater in the review of monographs, and peer review for journals does not involve a decision at the end of the process by a faculty editorial board, which is a key part of the process as it occurs at university presses. So, if there is substantial variability of peer review within the journals community, there is even more variability when monographs are considered as well.

Thanks, Kent. This is great. It is very hard to explain to people outside the scholarly publishing world how this all makes sense; the entire ecosystem is a little off the wall. On occasion, those of us who manage peer review are called on the carpet to defend how a paper was published that others think is bogus. I have seen very public campaigns against chief editors over this. But the truth is, some work is theoretical. Many of those papers are based on a factual foundation with layers of assumptions applied. The assumptions have to be within the realm of the possible, but often these papers encourage further examination by others.

A good editor wants to encourage innovation and debate within the community represented. If the reviewers and editor did not uncover a fatal flaw in the paper and are interested in seeing what comes out of the ensuing discussion, they accept the paper. If no one cares, or readers think the theories are improbable, the paper sits in the archive of published works, barely ever downloaded or cited. If a theory is tested and turns out to be false, papers will be published to “correct” the literature. If the theory turns out to be a success, everyone is pleased!

Letters to the Editor, Discussions and Closures, and commentary certainly serve this purpose. As unsatisfying as it is to hear, the response to someone who claims “peer review failed” because they don’t agree with a paper is: write your own paper to refute the claims.

The Slate article is a “puff” piece that, while framed as being about “peer review”, had more to do with general article reviewing and much less to do with the academic community that is of interest to readers here.

Looking at the “pub/perish” community, the careful arguments presented in this piece seem almost irrelevant, since the institutions that weigh the value of articles for pub/perish rarely consider the nuances this piece dwells on. The basic attitude is whether the journal uses “peer review” of basically the “double blind” flavor, without caring to delve too deeply into what that means.

In an earlier exchange on SK it was argued that publishers provide real value in managing “peer review” because finding a reviewer is a time-consuming chore, almost suggesting that one has to go out and check long lists to find a person to fill that responsibility. That does not speak well of the idea that journals have such nuanced criteria for peer selection.

No one argues that peer review has no value in improving a publication. But given the current volume of material in journals, as opposed to that of books, the idea of a true exchange between reviewers and authors to carefully curate an article seems more like an argument made by Alice’s Humpty Dumpty, in both form and practice.

Strip away the pub/perish default for promotion and tenure, and the form and function of peer review might become what Kent devoutly wishes: something worthy of a discussion of the above nuances, not the persiflage that gets brushed aside at the time of a faculty review. It would probably also push more scholarly works toward publications such as PeerJ and OA in general.

Reblogged this on and commented:
Kent Anderson’s thoughtful blog on peer review, a topic always relevant to good journal publishing, is a well-written synopsis of its adaptive subject: as an activity, a scholarly standard, and a complex process.
