
According to Nature News, a study was presented at the Sixth International Congress on Peer Review and Biomedical Publication in Vancouver, Canada, that sought to show that peer-reviewer quality deteriorates with age. Unfortunately, if the Nature News article is a fair representation of the research (which is unpublished), the study seems to fall short of demonstrating much of anything.

The study was conducted by Michael Callaham, MD, editor-in-chief of the Annals of Emergency Medicine. He compiled and analyzed the scores editors at the journal had given to more than 1,400 reviewers between 1994 and 2008. Annals editors rate reviews on a scale of one to five, with one being unsatisfactory and five being exceptional. Ratings are based on whether a review offers constructive, professional comments on study design, writing, and interpretation of results, and whether it gives the editor useful context for deciding whether to accept the paper.

While the average score held steady at 3.6 on the five-point scale, more than 90% of individual reviewers’ scores fell during the period, at an overall rate of 0.04 points per year. This leads Callaham to believe that reviewers get worse as they grow older:

Callaham agrees that a select few senior advisers are always very useful. But from his own observation, older reviewers do tend to cut corners. He notes that psychological research shows that experts in complex tasks typically reach a plateau and then stay there or slowly deteriorate. Perhaps by the time researchers are asked to review a paper at his journal, they are already experts. He suspects the same would hold true for journals across all fields.

It’s important to remember there are at least two moving parts to this scoring system — the reviews submitted and the editors reviewing the reviews. Therefore, there are several possible explanations for Callaham’s modest observations:

  1. Younger reviewers are less familiar with the conceits of peer review, so they write long reviews full of minuscule details and tangents: unnecessary but commendable embroidery as they learn the ropes.
  2. Younger reviewers are trying to impress, so they spend more time and care on their reviews.
  3. Younger reviewers are less familiar to the editors, so they appear as “fresh faces” and are scored higher as a reflection of novelty and to encourage them to review again.
  4. Older reviewers have been seen again and again by the editors, so they are less likely to impress the second or third time they review, especially if they are being seen by the same subspecialty editor.
  5. Older reviewers are reviewing more often, for more publications, and know the ropes. They do “cut corners” or use more jargon or bottom-line language to get their views across.
  6. During the timeframe under consideration, the evaluation tool became more familiar to users. The tool also “aged” as the users and reviewers did, a factor that can’t be ignored. Teachers at the beginning of their careers grade differently than they do later, and it’s likely editors who gained experience with the tool graded differently as time passed.
  7. The editors at the Annals of Emergency Medicine probably aged the same amount as the reviewers over the years. Are the editors at Annals becoming grumpier graders?

Overall, attempts to make scientific peer review more scientific seem misguided. The purpose of tools like these eludes me. If it’s to create a more stable, replicable system, that seems to occur when good people with sound judgment agree about what they’re doing and create a great journal. Quantitative measures of their intramural behaviors are pretty irrelevant, and even distracting.

But maybe I’ve been doing this too long, and I’m just getting older . . .

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

2 Thoughts on "An Old-Age Problem Among Reviewers?"

Practical versus Statistical Significance

The main message from this news, that quality of peer review declines with reviewer age, does not seem that novel considering the professional trajectory of scientists as they age.

Kent rightly challenges the validity of this study, and whether review “quality” as measured on a five-point scale translates into higher-“quality” articles being published.

We should also not forget the practical significance of the results. A decline of 0.04 points per year per scientist translates to a one-point score difference after 25 years (an entire career) of reviewing. While one may have the statistical power to make such an observation “statistically significant,” I’m not sure it merits a change in editorial policy. And even if it did, could one really put pressure on older scientists to produce more detailed reviews, given that the process is largely voluntary?
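To make the gap concrete, here is a minimal sketch in Python. The reviewer count, time span, mean score, and slope come from the figures reported above; the noise level and the one-score-per-reviewer-per-year design are my own assumptions, so this illustrates the statistics rather than reproduces the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Loosely matching the reported figures: ~1,400 reviewers over the
# 14 years 1994-2008, a mean score of 3.6 on the 1-5 scale, and a
# true per-reviewer decline of 0.04 points per year. The noise level
# (0.8) and one score per reviewer per year are invented for illustration.
n_reviewers, n_years = 1400, 14

year = np.tile(np.arange(n_years), n_reviewers)
trend = 3.6 + 0.04 * (n_years - 1) / 2 - 0.04 * year  # centered on 3.6
score = np.clip(trend + rng.normal(0, 0.8, year.size), 1, 5)

slope, intercept, r, p, se = stats.linregress(year, score)
print(f"estimated slope: {slope:+.3f} points/year")  # close to -0.04
print(f"p-value: {p:.1e}")   # vanishingly small: 'statistically significant'
print(f"R^2: {r**2:.3f}")    # a few percent: little practical significance
```

With nearly 20,000 simulated scores, the p-value is effectively zero even though the trend explains only a few percent of the variance in any individual score, which is exactly the distinction between statistical and practical significance.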

What about labs where the PI hands off peer-review assignments to postdocs or students, gives what the student has written a cursory review, and then sends that in as their own review? This is probably more common than you think, and it would certainly skew a study like this.
