I was recently given the opportunity to read a fascinating paper by Melinda Baldwin (Books Editor at Physics Today magazine, published by the American Institute of Physics), entitled “Scientific Autonomy, Public Accountability, and the Rise of ‘Peer Review’ in the Cold War United States” (Isis, volume 109, number 3, September 2018). Melinda is an accomplished historian of science, with a special emphasis on the cultural and intellectual history of science and scientific communication. Not only is her writing infectiously entertaining, but the story itself is new, or at least it is new to me. It turns out that peer review in scientific journals is a relatively recent construct, first emerging in the nineteenth century and not seen as a central part of science until the late twentieth century.

Peer review, much like shag carpet and bell-bottoms, is largely a product of the 1970s.

Melinda paints a picture of constant change in peer review, which perhaps provides a lesson for us all. Maybe this should be obvious, but there is no status quo in academic publishing. While we may feel our moment is more important than those that have gone before, or those ahead of us, expectations and models are fluid, be you author, reviewer, publisher, institution, or funder.

In this interview I ask Melinda to talk about her article, and provide some more personal views on peer review topics of the moment.

Tell us a little about yourself…

I’m trained as a historian of science, but my full-time job is at Physics Today, the flagship magazine of the American Institute of Physics. I edit the book reviews, manage the feature articles, and write some pieces about the history of physics. I’m also the author of Making “Nature”: The History of a Scientific Journal, which (as you might guess from the title) is about the history and development of Nature.

What led to your writing this article about the history of peer review?

I think my peer review project started when I discovered something really unexpected about Nature: that it hadn’t employed systematic external refereeing until 1973! When I first learned that, I assumed Nature was unusual, but as it turned out, a lot of commercial journals did not consult referees about every paper they published until well into the 1970s and even the 1980s. That seemed especially true outside the US. I didn’t have the space to explore that issue fully in my book on Nature, but as I wrapped up that project I knew I wanted to write more about the history of peer review.

What are the key highlights of your study that you would like readers of The Scholarly Kitchen to carry with them? Are there any stories you would particularly like to highlight?

One major takeaway point that I think might surprise Scholarly Kitchen readers is that peer review is much, much younger than we usually assume. There’s this story about Henry Oldenburg, the first editor of the Philosophical Transactions of the Royal Society of London, that claims he was the first person to consult external referees. Which would suggest that peer review has been part of scientific publishing ever since the first scientific journal.

But it turns out that’s not really true. The referee system as we know it today first started to take shape in the nineteenth century, and it developed very slowly and haphazardly from there. Refereeing was most common in Anglophone countries and among journals that were affiliated with learned societies like the Royal Society of London. Well into the twentieth century, commercial journals and journals outside the English-speaking world tended to rely on editorial judgment instead of referee opinions.

One of my very favorite anecdotes about the history of refereeing is the famous story of Einstein going through peer review at The Physical Review in 1936. He and his collaborator Nathan Rosen submitted a paper on gravitational waves that had some controversial conclusions, and the editor, John Tate, sent it out for an external opinion. Einstein was incredibly offended! He told Tate that he was withdrawing the paper because he had not authorized Tate to send it to anyone else before it was published.

If we think about Einstein’s past publishing experience, though, his shock makes sense. He was used to the German system in which editors like Max Planck evaluated and chose papers themselves. Also, Einstein’s previous submission to Physical Review had not been refereed — not every paper was sent out for referee opinions, only the ones that seemed controversial or possibly questionable. So the story really highlights the fact that peer review is not this unchanging part of science that everyone has agreed on since the seventeenth century.

You trace the establishment of peer review in scientific journals and funders to the Cold War — can you talk a little about this, perhaps providing us with your favorite stories?

So I should say that refereeing — the system of sending a paper or grant proposal out for external opinions from fellow experts — existed much earlier than the Cold War. Refereeing started cropping up at academic societies in the nineteenth century. But the term “peer review,” along with the idea that research has to be peer-reviewed to be scientifically legitimate, is something that first arises in the late twentieth century United States.

In my recent paper for Isis, I argue that the idea that a grant or journal article has to be peer-reviewed to be scientifically respectable arose as scientists grappled with the consequences of increased public funding for their work. There were a number of observers who wanted scientists to be more accountable to legislators and members of the public because they were receiving public money.

Scientists, however, didn’t really want congressmen weighing in on which grant proposals they liked best. So scientists pushed the idea that peer review — evaluation by experts — was the only legitimate way to distinguish good science from bad science. The term “peer review” also dates from the Cold War era, and I think it’s for exactly this reason. “Refereeing” is a slightly more general term, but if you call the evaluation process “peer review,” it establishes a much narrower range of people who are qualified to perform it.

A lot of these issues about scientific autonomy versus public accountability came to a head in the mid-1970s in the wake of a major economic crisis. There was a trio of legislators — Republican Congressmen Robert Bauman and John Conlan, and Democratic Senator William Proxmire — who started attacking the National Science Foundation’s (NSF) funding on the grounds that it was giving money to projects that were frivolous and wasteful. All three of them thought that the NSF should be reined in and that there should be more Congressional oversight of its grant-awarding process.

In response to the controversy, a House subcommittee held a hearing on the NSF’s peer review process in July 1975. The result is this fantastic 1,000-page document filled with scientists, historians, sociologists, Congressmen, and NSF employees talking about what they thought peer review was for and why it was important for science. The hearings make it clear that the NSF was relying on the idea that peer review is crucial to good science to justify rejecting some of the proposed Congressional oversight. And it worked — the NSF had to make some changes to the way it handled referee reports, but the congressional oversight proposals were dropped.

As the idea that peer review is central to “good” science took hold, journals and grant agencies found themselves under more and more pressure to rely on referee opinions in order to be considered legitimate. You can see that pressure in discussions about peer review outside the US, where editors and scientists in the 1970s and 1980s occasionally expressed bafflement about how much Americans cared about peer review. One of my favorite editorials on that subject is from the British medical journal the Lancet — there’s a 1989 piece that complains “in the United States far too much is being asked of peer review,” and the editorial board assures the readers that “reviewers are advisers not decision makers” at the Lancet. At the same time, however, there was concern among non-US editors that their journals wouldn’t be considered respectable in the US unless they started doing more to consult external referees.

Did peer review for funding agencies develop in tandem with peer review for scientific journals, or was there an independent path?

Refereeing at funding agencies developed largely independently of refereeing at journals — in fact, refereeing was comparatively rare at funding bodies well into the twentieth century, which is why I think there hasn’t been a lot of historical work done on refereeing systems at funding organizations. But even though journals and funding bodies developed their refereeing systems largely separately, both types of institutions got caught up in this Cold War moment when the scientific community found itself called to justify its influence over scientific funding.

How can publishers do more to support authors, and to help reviewers understand their role and be recognized for their work?

One of the weird things about peer review as it was conceptualized during the Cold War is that it wasn’t supposed to confer any rewards on referees — that’s a feature, not a bug. At the 1975 NSF peer review hearings, a lot of witnesses talked about the referee as a selfless person devoted to the good of science, someone whose identity had to remain secret so they could be candid and not face any professional reprisals for their work. The flip side of that is that referees also didn’t get any recognition — they were doing it for the good of science, not personal gain.

But I don’t think good feedback is necessarily incompatible with having recognition for referees. I think one major step that publishers should consider is paying referees an honorarium. Many economics journals do that, and just based on anecdotal evidence (my partner is an economist) it seems like it improves turnaround time and the level of detail and engagement from referees. In the humanities, paying referees would also acknowledge the current realities of the job market: a lot of experts whose opinions we’d like are adjunct faculty members or independent scholars with tenuous incomes, or people with nonacademic jobs whose bosses may not be thrilled to see them reading a paper during work hours. If we want to continue to benefit from the expertise and experience of scholars who aren’t in tenure-track positions, I think it’s going to be important to pay for their labor.

The problem with that, of course, is that it risks creating a system where only non-tenure-track scholars perform reviews — if it’s not seen as a professional obligation, busy faculty members might decide the money isn’t worth their time. So there may need to be other incentives — for example, maybe a certain number of authors on a paper need to have reviewed for that journal before it will consider their submission.

Do you have thoughts about the future of peer review for authors, reviewers and publishers as a result of your study?

Like most historians, I’m a little reluctant to make predictions about the future, or to prescribe future courses of action based on what’s happened in the past. But I think looking at peer review’s history does highlight some important things for people who are frustrated with the way publishing and funding currently works. First, peer review isn’t something that’s been an unchanging part of science since the Scientific Revolution. It’s a process that has changed over time in response to the scholarly community’s needs — and one that could change in the future.

I also think that one of the reasons a lot of researchers are frustrated with peer review right now is that we’re asking too much of it. In the 1970s, peer review was cast as a crucial process that rewarded good science and corrected bad science — but in practice, we feel like it doesn’t do either of those things very well. We may need to reconsider what we can realistically expect from referees. On the other hand, peer review also helps secure a certain amount of autonomy for the scholarly community, which I think is an underappreciated function and one that scholars might want to preserve as they start thinking about new paths forward.

Robert Harington


Robert Harington is Chief Publishing Officer at the American Mathematical Society (AMS). Robert has the overall responsibility for publishing at the AMS, including books, journals and electronic products.


6 Thoughts on "The Rise of Peer Review: Melinda Baldwin on the History of Refereeing at Scientific Journals and Funding Bodies"

Great and informative article. I remember the NSF hearings and the political attacks the agency sustained. Not unlike the current climate, no pun intended. I would add that the term “Cold War” was coined in 1947, and world events were viewed through that lens until the collapse of the USSR.

I think one major step that publishers should consider is paying referees an honorarium.

The concern here is that this would cause a system that is already seen as too expensive to become even more so (one assumes these fees would be paid either by subscribers or authors via APCs). Also, it would favor the larger commercial publishers that would have more wiggle room within their significant profit margins to offer higher reviewer fees than would the smaller, independent publishers with lesser margins, resulting in further consolidation of the market.

So there may need to be other incentives — for example, maybe a certain number of authors on a paper need to have reviewed for that journal before it will consider their submission.

You can’t publish in the top journals unless you’ve already been asked by the editors to peer review for them? How would this work for a new scholar in a field? Would it create a closed loop of authors within a journal, no new entrants allowed in? Would you then add in other authors who had nothing to do with the work, just to get into your journal of choice?

Yes: some remuneration for non-tenure-track reviewers.
No: requiring prior review service for publication [or even grant] submission eligibility.

Well written, David.

LAUGHING STOCK

Populations that we deem “college educated” are, in general, able to understand subtleties in the titles of research projects that Proxmire turned into a laughing stock. Proxmire’s arguments were able to flourish in the broad political arena because the target population had a bell-curve distribution of which the college educated formed but a small part. However, although less broadly based, the peer-review population has its own bell curve, and there are those who readily turn laughing stock projects or papers with subtleties beyond their ken. Separating the wheat from the chaff remains problematic. The intrinsic error-proneness of peer review systems should have been factored into their designs. No wonder Einstein was angry!

Tenure-track scholars are already rewarded for peer review, through the service requirements of their tenure evaluations. We could make this more explicit. We could assign them ranks or scores based on how well or often they execute their duties.

I don’t think there are a lot of non-tenure track scholars doing peer review. It tends to be a very closed club where editors pick people they know and people with big reputations in their fields of study. That means people who have publishing expectations at their institutions. However, if we base the choice of peer reviewer on the scores of how well they perform their duties, it doesn’t matter if it’s mostly adjuncts doing it as long as they’re doing it well.

Only allowing submissions from people who have been reviewers for the journal in the past is going to eliminate all submissions from new scholars and new to the field scholars. That’s terrible for science.

I agree that only allowing submissions from people who have been reviewers for the journal in the past is an awful idea. However, I disagree with the statement that there are not a lot of non-tenure-track scholars performing peer review. I am a postdoc and have reviewed manuscripts for 18 different journals so far, for many of which I have reviewed multiple manuscripts. The majority of these journals are well regarded (e.g., Proceedings of the Royal Society of London B), although a few are smaller, more specialist journals. I also want to clarify that I am not being given a manuscript by my PI to review, with him then submitting the review. The associate editors contact me directly, possibly after I was recommended by my PI or another scientist in my field, but nonetheless, I am the one being contacted to act as a referee. Other postdocs I know have had similar experiences.
