What does it mean when you claim a journal is peer reviewed? What does it mean when you say an article was peer reviewed?
Peer review is a major signal of quality, yet even its major brokers hesitate to describe it beyond the bare fact that manuscripts are sent out for review by peers.
Peer review is a tool, not a standard. And not all tools are made of the same stuff.
But the label of “peer review” can be used in ways that sometimes seem downright misleading — one version is not like another, and some forms strike me as so cursory or ill-managed that they hardly qualify for the description. Yet, we never get beyond those magical two words “peer review,” so we never know what it really consists of.
One very down-to-earth editor compared peer review to chocolate chip cookie recipes — everyone has one, but some are better than others, and some bakers make great cookies simply using the recipe on the bag.
It’s partially the process, partially the ingredients.
I think it’s time we stop allowing “peer review” to be used as if it’s an immutable standard, as if it’s uniformly deployed and consists of the same ingredients every time.
We could improve peer review immensely simply by describing it in the same factual way we describe studies — with qualifiers like double-blind, randomized, placebo-controlled — to differentiate how various practitioners accomplish it. Then, we can better assess how well it’s being done, once we know which aspects of the process are being used.
After all, each journal has its own version of peer review, and there can be multiple versions within a single journal (editorials, commentaries, and review articles, for instance, are reviewed through a different process than research papers).
Innovations in content forms can create the need for new peer review approaches — for instance, editorial staffs have to invent ways to peer review videos, interactive educational exercises, or animations. Yet, if we don’t reveal these new processes somehow, users and readers are left with just “white label” peer review.
By merely slapping “peer reviewed” on our journals, we’re stating a fact but obscuring some important and potentially helpful information. How many people participated? How long did it take? It’s all good information.
And it’s not that hard to describe.
Here are some potential categories I’d like to see:
- Number of outside reviewers
- Degree of blinding (names and institutions removed from the manuscript, for instance)
- Number of review cycles needed before publication
- Duration of the peer review portion of editorial review
- Other review elements included (technical reviews, patent reviews, etc.)
- Editorial board review
- Editorial advisers review
- Statistical review
- Safety review
- Ethics and informed consent review
We have disclosure statements to deal with commercial influence. Why not a simple statement of peer review like:
This paper was peer reviewed by 3 outside reviewers blinded to the authors’ names but not institutions; it required 2 iterations of review prior to acceptance, and peer review required 14 weeks from start to finish; in addition to outside peer review, the paper was reviewed by a panel of editorial advisers, two statisticians, and a patient safety expert.
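A disclosure like this could even be generated automatically from structured editorial metadata. Here's a minimal sketch of the idea; the field names and the `ReviewRecord`/`disclosure` helpers are my own invention for illustration, not any existing journal system or standard:

```python
# Hypothetical sketch: generating a peer review disclosure statement from
# structured metadata. All field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    outside_reviewers: int
    blinding: str            # e.g., "names but not institutions"
    iterations: int          # review cycles before acceptance
    weeks: int               # duration of peer review
    extra_reviews: list[str] # statistical, safety, ethics reviews, etc.

def disclosure(r: ReviewRecord) -> str:
    """Render the metadata as a human-readable disclosure statement."""
    parts = [
        f"This paper was peer reviewed by {r.outside_reviewers} outside "
        f"reviewer{'s' if r.outside_reviewers != 1 else ''} blinded to the "
        f"authors' {r.blinding}",
        f"it required {r.iterations} iteration"
        f"{'s' if r.iterations != 1 else ''} of review prior to acceptance, "
        f"and peer review required {r.weeks} weeks from start to finish",
    ]
    if r.extra_reviews:
        parts.append(
            "in addition to outside peer review, the paper was reviewed by "
            + ", ".join(r.extra_reviews)
        )
    return "; ".join(parts) + "."

print(disclosure(ReviewRecord(
    outside_reviewers=3,
    blinding="names but not institutions",
    iterations=2,
    weeks=14,
    extra_reviews=["a panel of editorial advisers", "two statisticians",
                   "a patient safety expert"],
)))
```

The point of the sketch is that once the ingredients are captured as data rather than prose, the same record could feed a reader-facing statement, a journal-level summary, or a cross-journal comparison.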
Simple statements like this would force shops that peddle sloppy peer review to state how they’re accomplishing it, how long it took, and a number of other helpful facts. And it would make them state their practices publicly and risk being discovered if they fudge the truth. It could end the abuse of peer review as a label some journals hide behind.
Imagine reading a paper that stated something like this:
This paper was peer reviewed by 1 outside reviewer not blinded to the authors’ names or institutions; it required 1 iteration of review prior to acceptance, and peer review required 1 week from start to finish.
Compare that to a grueling process for an ambitious study with surprising findings:
This paper was peer reviewed by 13 outside reviewers blinded to the authors’ names and institutions; it required 6 iterations of review prior to acceptance, and peer review lasted 43 weeks from start to finish; in addition to outside peer review, the paper was reviewed by this journal’s editorial board, a panel of expert advisers, technical reviewers, a patent reviewer, and a legal expert familiar with law in this area.
While you could infer from the second example that the paper in question was dicier in some manner, science is supposed to be dicey sometimes, and clearing a high bar shows the authors’ commitment, the journal’s high standards, and the dimensions that were probed. Readers might be curious to know how hard everyone worked to get the paper right, and it might provide the authors, reviewers, editors, and publishers with recognition for the amount of labor that can go into a difficult but worthwhile paper.
Right now, we rely on journal brands to imply levels of care, but these brands can change their processes or assert processes they don’t follow. As readers, we never know. And we certainly don’t know at the article level. Opaque labels like “peer reviewed” or a journal’s brand don’t tell us what we expect to know about other important aspects of a particular paper, like study design, author disclosures, materials and methods, or affiliations. So why do we accept such substitutions for peer review descriptions?
Peer review shouldn’t be sold without an ingredients list.
It’s simply too easy to claim something is “peer reviewed,” and there can be downsides to the credibility of scholarly publishing overall if we continue to allow this label to mask differences in approach, capabilities, and process.
Unlike some, I don’t think “open” peer review is the answer. There are many good and compelling reasons to keep peer review a private editorial process — not only does it allow for the necessary negotiations between journals, authors, and reviewers to be conducted in a safe environment, it also lets rejected authors save face, learn from reviews, and resubmit elsewhere without a blood trail.
At the same time, there’s no reason not to boast about an excellent approach to peer review. Top-tier journals sweat peer review, and review in general. But because the term is opaque and undifferentiated, they’re being lumped together with lesser peer review approaches. That doesn’t seem fair, and it certainly doesn’t help them in the micro sense, or science in the macro sense.
I say, describe it, be proud of it, and let the others try to match you.
Let’s describe peer review just as we describe funding, study designs, and methods. It’s integral to scientific integrity, misleading on many levels as a bare label, and a differentiator that should be unpacked.
At least readers would know which recipe was used and what the ingredients were.