The opinions of highly respected senior scientists tend to get a lot of attention, and a number (here, here, and here) have lamented the state of peer review. But what if the reviewer experience for high-profile researchers is the exception and not the rule?
One common complaint voiced by big name scientists is that the rapid growth in the number of papers submitted to journals has led to a massive increase in the demands on the reviewer community. After all, 10 years ago, there were fewer papers being published (Figure O-13 here), and now they’re getting a huge number of requests to review. A reasonable conclusion is that the peer review system is more overloaded now than ever before — it may even be close to the point of collapse.
This alarming conclusion may instead reflect a more benign change: the steady increase in these scientists' own reputations over the last decade. If seniority alone is a good predictor of the number of review requests, these big name scientists would have seen a steady rise in their review workload even if overall submissions had stayed the same. So while it’s natural to join the dots between rising submissions and your own workload, and from there to the imminent demise of peer review, the connections may not really exist.
To start with, there’s no evidence that increasing submissions increases the per individual review burden — our study with data from the journal Molecular Ecology found that neither the average number of reviews per person nor the number of requests required to get a single review changed much between 2001 and 2010, even though the number of submissions almost doubled (see here and here).
Second, what is the actual relationship between “fame” and the number of reviewer invitations? To get a feel for this (again with Molecular Ecology data), we* picked sets of 30 reviewers who had received 1, 2, 4, 6, 8, 10, or 11 invitations since April 2008 and everyone who had received more than 13. To measure “fame,” we used Web of Science to calculate h-index values for everyone. Lastly, to approximate the match between the researcher’s interests and the scope of Molecular Ecology, we counted how many papers they had published in the journal.
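The sampling design described above can be sketched roughly as follows. Everything here is illustrative: the data are synthetic and the variable names are my own, not the actual Molecular Ecology records.

```python
import random

random.seed(1)

# Synthetic stand-in for the real data: reviewer -> number of review
# invitations received since April 2008.
invites = {f"rev{i}": random.randint(0, 20) for i in range(2000)}

# Bin reviewers by invitation count and sample 30 per bin...
bins = [1, 2, 4, 6, 8, 10, 11]
sample = {}
for b in bins:
    pool = [r for r, n in invites.items() if n == b]
    sample[b] = random.sample(pool, min(30, len(pool)))

# ...and keep everyone who received more than 13 invitations outright.
sample["13+"] = [r for r, n in invites.items() if n > 13]
```

Each sampled reviewer would then be looked up in Web of Science for an h-index, and their publication count in the journal tallied.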
These data are plotted in Figure 1; the lines give the relationship between number of Molecular Ecology papers and reviewer invites for a range of h-index values. Plots for separate bins of h-indices can be seen here.
Surprisingly, the main predictor of how often someone was invited was how much they published in Molecular Ecology. This makes sense because people who do lots of research in core molecular ecology areas will be suitable reviewers for a high proportion of our papers.
More surprisingly still, fame mostly reduced the number of review invitations: above about five Molecular Ecology papers, a famous scientist would get fewer invites than a lower h-index researcher with the same number of papers. This implies that editors deliberately pick more junior scientists from groups working on “core” molecular ecology topics, probably because they expect the big name to decline the invitation (see Figure 3).
For researchers with fewer than five papers in the journal, the opposite is true — the more famous these “peripheral” molecular ecologists are, the more invites they get. This is probably because the topic of the paper is quite far from our normal scope and the editor gravitates towards researchers they’ve heard of rather than taking a risk with an unknown junior.
The data above actually suggest that being famous generally makes you less likely to be invited, especially if you’ve published with us often. However, as a researcher you can only be central to one field, and peripheral to lots of others, and hence being famous may mean you attract review invitations from many different disciplines.
Since this can’t be examined with data from just one journal, I did a quick survey around the biodiversity department here at the University of British Columbia, and asked 15 people with varying seniority how many review requests they’d had since the start of 2011; the resulting graph is below (the h-index data come from Web of Science). The black dots are estimates based on a monthly or weekly rate — even given considerable error in these, you can see that scientists with high h-indices get almost an order of magnitude more requests than everyone else.
This huge volume of review requests hitting senior academics means that they decline a much higher proportion than everyone else. This can clearly be seen in the Molecular Ecology data — a plot of the proportion of review invites accepted against h-index gives a fairly strong negative slope. Furthermore, this slope is not driven by the points on the far right of the graph: restricting the plot to h-indices between 0 and 20 yields a very similar relationship.
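That acceptance-rate analysis amounts to a simple regression. A minimal sketch with synthetic numbers (the real analysis used the Molecular Ecology records), including the check that the slope survives when high h-indices are dropped:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Synthetic example: acceptance rate declines as h-index rises.
h_index  = [2, 5, 8, 12, 18, 25, 35, 50]
accepted = [0.80, 0.75, 0.70, 0.60, 0.55, 0.45, 0.35, 0.25]

slope = ols_slope(h_index, accepted)

# Repeat with only h-indices up to 20, to confirm the negative slope
# is not driven by the far right of the plot.
low = [(h, a) for h, a in zip(h_index, accepted) if h <= 20]
slope_low = ols_slope([h for h, _ in low], [a for _, a in low])
```

Both slopes come out negative on these numbers, which is the pattern the plots show.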
The big picture here is that senior academics are being bombarded with requests to review papers. However, since their journey along the x-axis of Figure 2 has coincided with a rapid growth in scientific publications, they may be misdiagnosing their review workload as a symptom of a system increasingly in distress. These academics then feel compelled to write editorials and opinion pieces complaining about the dire state of peer review, and the steady repetition of the “peer review is broken” mantra has led to a growing sense of panic in the research community.
In fact, there’s little evidence above that junior researchers (i.e., most of the people in the figures) are overburdened, and the evidence for a crisis is weak. Grouse all you like, but please stop yanking the emergency chain . . .
* Thanks very much to Loren Rieseberg and Arianne Albert for discussions and help with the data analysis.