Elite medical editors are not good at predicting the citation potential of research manuscripts, a new study concludes.

The paper, “Evaluation of editors’ abilities to predict the citation potential of research manuscripts submitted to The BMJ” was written by Sara Schroter and others at the BMJ. It appears in the Christmas issue of The BMJ, an issue traditionally reserved for humorous and satirical research.

Schroter and colleagues compared BMJ editors’ predictions of the citation potential of manuscripts submitted in 2015 and 2016 with those manuscripts’ actual citation performance in 2022. They concluded that “editors weren’t good at estimating the citation potential of manuscripts individually or as a group,” adding for emphasis that “there is no wisdom of the crowd when it comes to BMJ editors.”


Cheeky lines inserted into the text aside, the ability of editors to predict the citation potential of medical manuscripts doesn’t seem that funny, at least not to me. Indeed, the research was presented as a serious paper at the 9th International Congress on Peer Review and Scientific Publication, an event co-sponsored by BMJ, JAMA, and Stanford University. The paper even lacks a witty title. When I asked Schroter about this, she responded: “we tried to make it [the paper] entertaining by mocking the editors.” I will leave it to the individual reader to determine how well their efforts succeeded.

A closer read of their paper makes me wonder whether the authors were drunk on the spirits of their own holiday cheer while drafting their conclusions, which don’t seem to square with their data.

Of the approximately 4,000 research manuscripts The BMJ receives each year, more than 80% are rejected without review (desk rejected), according to editorial data sent to me by Dr. Schroter. A further 7-9% are rejected after external review, and the BMJ editorial committee rejects another 6%, leaving just 4% of submissions to advance to publication. If BMJ editors are not very good at distinguishing the future performance of manuscripts, it may be because they are attempting to distinguish the best-of-the-best-of-the-best-of-the-show. At this level, there may not be a lot to distinguish.
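To put those percentages in concrete terms, here is a back-of-envelope sketch of that editorial funnel in Python. The shares are my approximations of the figures quoted above — 0.82 stands in for “more than 80%” and 0.08 for the midpoint of the 7-9% range — so the resulting counts are illustrative, not numbers from Schroter’s data.

    # Back-of-envelope sketch of The BMJ's editorial funnel,
    # using approximations of the percentages quoted above.
    submissions = 4000  # approximate research manuscripts received per year

    funnel = [
        ("desk rejected without review",    0.82),  # "more than 80%"
        ("rejected after external review",  0.08),  # reported as 7-9%
        ("rejected by editorial committee", 0.06),
        ("published",                       0.04),
    ]

    for outcome, share in funnel:
        print(f"{outcome}: ~{round(submissions * share)} manuscripts")

In other words, editors are being graded on their ability to rank-order roughly 160 survivors of a process that has already discarded some 3,800 manuscripts.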

Still, BMJ editors did a lot better than Schroter acknowledges. Not only could editors predict, overall, which papers would fall into low, average, and high citation groups, but they did it pretty well within that 4% sliver of papers that made it to publication. In their analysis, Schroter sets up an arbitrarily high threshold of 50% concurrence and then berates her own editors for not meeting that standard.

We should emphasize that citation potential plays no part in The BMJ’s review process. As a result, we shouldn’t be surprised by a low correspondence between predictions and outcomes. Citation potential may be associated with qualities that editors do consider when evaluating papers, such as novelty, evidential strength, and clinical importance. If BMJ editors were chiefly interested in citation potential as a way to improve their journal’s Impact Factor, they may be better off using commercial Artificial Intelligence (AI) software to identify which submissions are more likely to be highly cited. Who needs highly-trained editors? Most of them are utterly boring at holiday parties anyway!

BMJ Christmas issue papers are funny because they draw attention to common workplace observations (e.g., “The case of the disappearing teaspoons” or “The survival time of chocolates on hospital wards”) or provide conversational openers at holiday parties (e.g., “Wine glass size in England from 1700 to 2017” or “Effect on gastric function and symptoms of drinking wine, black tea, or schnapps with a Swiss cheese fondue”). Other Christmas papers deal with social issues of potential clinical significance for the young (“Head and neck injury risks in heavy metal: head bangers stuck between a rock and a hard bass”) or for the elderly (“How fast does the Grim Reaper walk?”). Still others are just plain silly (e.g., “Sword swallowing and its side effects”). While not all Christmas papers rise to BMJ’s standard of “light-hearted fare and satire,” they are held to the same level of rigor as other BMJ research. Perhaps this is why BMJ Christmas papers are widely used in college classes. Laughter is indeed the best medicine.

Is Schroter’s paper, an evaluation of editors’ abilities to predict the citation potential of research manuscripts, even funny? Not in my opinion. And I’m a funny guy…just ask my kids.

At best, their paper reads as a witty backhanded swipe at editors’ ability to perform their expected duties. At worst, it feeds an anti-elitist, technology-promoting narrative. I wouldn’t be surprised to see this paper cited in the marketing literature for an AI system that can predict manuscript performance better than a roomful of experienced medical editors. Wait, I’ve heard this joke before…

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

What is the correlation in the clinical medical space between citations and clinical impact? High citation of an article conveys its importance in the research science space. Is there broad evidence that papers that change clinical practice are highly cited? This question is often asked by my clinical colleagues. It has been suggested that altmetrics may indicate clinical importance, though the signal is not directional (a bad paper may have a high altmetric value but will not be cited).

I’ve thought about this a bit and would love to see whether the papers cited in clinical guidelines, or in updates to guidelines, are themselves highly cited. That would be one indication that the research is practice changing.

The perceived correlation between talent and citation potential has always struck me as problematic, so I appreciate Phil Davis’s witty critique of the BMJ piece. There are many reasons why an article might not be frequently cited, some of which might be a function of bias rather than the inherent quality of the work. Articles by independent scholars, for example, often seem to receive less attention, likely due in part to the lack of a university affiliation listed under the author’s name (I speak from personal experience here, as a musicologist who is unemployed due to multiple sclerosis). All of which is to say: many thanks again for reminding us that citation potential doesn’t tell the whole story about the value of a scholarly contribution!
