Recent discussions about peer review brought me back to thinking about Cathy O’Neil’s book, Weapons of Math Destruction, reviewed on this site in 2016. One of the complaints about peer review is that it is not objective — in fact, much of the reasoning behind the megajournal approach to peer review is meant to eliminate the subjectivity in deciding how significant a piece of research may be.

I’m not convinced that judging a work’s “soundness” is any less subjective than judging its “importance”. Both are opinions, and how one rates a particular manuscript will vary from person to person. I often see papers in megajournals that are clearly missing important controls, but despite this, the reviewers and editor involved judged them to be sound. I’m not sure this is all that different from asking why some reviewer thought a paper was significant enough to be in Nature. Peer reviews, like letters of recommendation, are opinions.

Discussions along these lines inevitably lead to suggestions that with improved artificial intelligence (AI), we’ll reduce subjectivity through machine reading of papers and create a fairer system of peer review. O’Neil, in the TED Talk below, would argue that this is not likely to happen. Algorithms, she tells us, are not objective, true, or scientific and they do not make things fair. “That’s a marketing trick.”

As we move into an era of AI, how much judgement should we be turning over to algorithms?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.


4 Thoughts on "Algorithms Are Opinions Embedded in Code"

This is a useful acknowledgement that the use of algorithms does not necessarily provide an increase in “objectivity.” Efforts to improve how clinical and research expertise are applied to the peer review process, recognizing that there will always be both subjective and objective components, whether through algorithms or other tools, are both needed and welcome.

This is foolish. What blind faith in big data? This is a straw man argument: find someone stupid, or make up such a person, and then argue that that person is stupid. It’s a perfectly circular rhetorical construction. Algorithms are based on human biases, and you are telling me that this is news?

And yet…

Peer review has its flaws. Human beings (even scientists) are biased, lazy, and self-interested. Sometimes they suck at math (even scientists). So, perhaps inevitably, some people want to remove humans from the process—and replace them with artificial intelligence. Computers, after all, are unbiased and sedulous, and have no self-interest.

Some also believe that removing the human aspect from the process would help eliminate tensions among authors, reviewers, and publishers.

A large portion of peer review, however, is subjective, and reviewers are often demonized as evil gatekeepers. Would artificial intelligence bring greater objectivity to peer review?

Comments are closed.