Editor’s Note:  Today’s post is by Seth Denbo, Director of Scholarly Communication and Digital Initiatives at the American Historical Association.

Open peer review hasn’t caught on in the humanities. Nearly ten years ago, a few notable experiments attracted the attention of the New York Times. The “Web Alternative to the Venerable Peer Review,” as the headline in the print edition on August 24, 2010 dubbed it, was presented as an innovation that would revolutionize the way scholars evaluated each other’s work. Breathlessly excited about the potential of web-based open review for “generating discussion, improving works in progress, and sharing information rapidly,” the Times contrasted this with what was presented as the purely “up-or-down judgment” of customary review practices. Openness was said to be central to the attractiveness of these new forms of peer review.


Flash forward to the present, and little widespread change in humanities peer review has occurred. Many articles on peer review have pointed out that the systematic practices we think of as central to scholarship and scholarly communication evolved as recently as the mid-20th century. Melinda Baldwin has written on how peer review did not come to be seen as necessary for scholarly legitimacy until the Cold War. Ben Schmidt has shown that the phrase “peer review” doesn’t enter the lexicon until the 1970s. Despite the relatively recent emergence of these systematic practices, peer review is central to scholarship. And, within a range of different ways of organizing review and masking the identity of author, reviewer, or both, a more-or-less closed process still dominates in humanities journals and book publishing. These long-standing practices still seem to provide editors with the evaluation they require to maintain quality and the feedback that assists authors in improving their work. Alex Lichtenstein, editor of the American Historical Review (AHR), recently wrote “as an editor I especially value the developmental as well as evaluative role” provided by the double-blind peer review practices and structures he currently directs.

Despite his commitment to double-blind review, Lichtenstein is overseeing the AHR’s first foray into experimenting with open review. “History Can be Open Source: Democratic Dreams and the Rise of Digital History” by Joseph L. Locke and Ben Wright is currently posted on ahropenreview.com for an open, public comment period that will run until early April. In parallel, the editors have invited several reviewers to submit more traditional peer reports. Those reviewers have been given the option of anonymity, but their reviews will be public.

Peer review practices vary between the sciences and the humanities, and even by discipline within those domains. Fittingly, given the relative preeminence of books in many humanities disciplines, where reputations and careers hang much more on monographs than on journal articles (which are often seen as a step on the way to the book), the earliest humanities experiments in open review were book manuscripts. Kathleen Fitzpatrick, now Director of Digital Humanities and Professor of English at Michigan State University, posted the manuscript of her book Planned Obsolescence: Publishing, Technology, and the Future of the Academy online for open comment in 2009, modeling new approaches to digital scholarly communication explored in the book.

Experiments of this kind have often been conducted by academics like Fitzpatrick with a scholarly interest in new media or digital humanities. When the prestigious Shakespeare Quarterly trialed open review in 2010, it was for a special issue on the Bard and new media edited by Katherine Rowe, a leading digital humanist. While the organizers and participants generally saw the process as beneficial, it has not been repeated. In 2011, Jack Dougherty and Kristin Nawrotski edited Writing History in the Digital Age — a book about how the Internet has changed the way historians teach, write, research, and publish — and the editors invited contributors to share their drafts for open comment on the web. The German digital humanities journal ZfdG, one of the few with a standard open review policy, allows authors an interesting choice. Contributors can opt for either a double-blind evaluation process before their article is published, or open, post-publication review with subsequent opportunity to revise the article. The Programming Historian, a highly innovative publication that provides online tutorials on digital tools and methods for historical research, also uses open review as part of their regular editorial practice. Palgrave Macmillan’s medieval cultural studies journal postmedieval is one of very few humanities journals that has used open review more consistently. Reviews for several special issues of the journal tackling a diversity of topics have been subject to what they call crowd review experiments since 2011.

Journals in the sciences, by contrast, have moved much deeper into the use of open online review. When Ann Michael asked several of the Scholarly Kitchen Chefs about the future of peer review for a 2017 post, Michael Clarke described a state of “fecund experimentation and roiling debate,” including journals exploring various forms and levels of openness. Some scientific journals use open reviews of experimental design to try to improve methodologies and research quality before any experiments are conducted. While there is little agreement about what open peer review is and almost as many different practices as there are journals doing it — a 2017 F1000 Research article identified over 120 different definitions — it has become an important enough part of the landscape that it has generated its own studies. A number of high prestige medical journals, such as the British Journal of Surgery, BMJ, and JAMA, have conducted studies into the effectiveness and quality of various forms of open peer review. But defining quality is a complex and often subjective process.

Why, then, have humanities journals and scholars not taken up open review practices in more than a few notable instances? Openness as both value and practice has infused the discourse around scholarship and the communication of ideas among many in the humanities, but old practices die hard. Is it that the current models meet the needs of these disciplines? Or is it merely a resistance to change? One of the primary complaints about peer review among scientists is the lack of consistency. Most humanities disciplines are comfortable with, and even rely upon, certain kinds of subjective judgments by scholars. It could be that with this outlook, authors and editors in the humanities feel less keenly the necessity to reform review to prevent inconsistency between reviewers.

And there is some evidence that double-blind review offers benefits for equity and inclusion. A 2017 study published in Critical Inquiry suggests that journals that maintain double-blind review practices have a broader base of authors (at least as defined by the institutions at which the authors received their PhDs and where they are employed). Another study, in computer science, showed that single-blind reviewing confers advantages on well-known authors and those from prestigious institutions. Researchers have also studied and documented the extent of gender bias when peer review is not double blind, showing that papers authored by women are accepted at lower rates than those by men. This evidence should make editors and scholars pause and consider the benefits of blind review.

Good peer review has never just been gatekeeping. In the humanities, reader’s reports are often longer than this blog post. Reviewers engage with the arguments, refer the author to other literature and sources they may have missed, and provide developmental editing suggestions. This leads to better research and better publications. There’s no reason why this can’t happen in an open environment, and in the case of many of the experiments listed above, it has. But any innovation should ensure that we preserve the good of the way things have been done, and take advantage of new technologies, practices, and processes in ways that promote excellence in scholarship.

Discussion

19 Thoughts on "Guest Post — Open Peer Review in the Humanities"

If only the sciences were as open to experimentation in peer review as you suggest! You reference the F1000 group. They are the gold standard. The remainder of the field, sadly, lags far behind. Don’t look for the hard sciences to lead the way. I’m rooting for the humanities.

“These long-standing practices still seem to provide editors with the evaluation they require to maintain quality and the feedback that assists authors in improving their work.” So we “provide editors” and “assist authors.” But what about assisting those of us who undertake, usually for free, the task of doing the actual reviewing – that is, reading a paper and checking its reference list?

Between 2013 and 2018 we could tap into non-anonymous post-publication reviews by accredited reviewers, provided by the NCBI (“PubMed Commons”). Thus, there was an intermediate step between reading the abstract of a cited paper in a reference list and actually reading the cited paper itself. The labour of reviewing was decreased, and we busy reviewers were more likely to say “yes” when that invitation to review arrived. Those were the days!

It’s not entirely clear how PubMed Commons (which was discontinued due to lack of participation and interest from the research community) would have solved the problems of recruiting and assisting reviewers, nor how adding an additional post-publication review process would result in decreased labor. Further, if, as a researcher, I see an interesting abstract, it’s unlikely that I’m going to turn to the peer reviews on that paper before reading the actual paper itself.

Well, there were 7,000 posted non-anonymous comments, many of them from folk with credentials in the research community. In addition to assisting us reviewers, PubMed Commons could provide feedback to editors who had initially invited pre-publication reviewers (A, B, and C) and subsequently encountered erudite post-publication commentary from another source (D) – perhaps someone whose expertise the editors were unaware of. I suspect a very small $ contribution from a publisher collective would persuade the NCBI to give PubMed Commons another chance.

7,000 comments out of how many papers? What percentage of uptake does that represent, and was there any data released on actual readership of those comments?

If you’re asking editors to search for and find reviews on papers they’ve published or rejected, read them, adjust their policies and add them to their reviewer databases, that seems like quite a time investment. Further, I’m not sure on the legal aspects of private sources paying for a government resource, but as with all things financial, the costs would ultimately be borne by the journals’ subscribers or authors, in an environment where costs are already seen as excessive. I’m not convinced this is the most important area where investment is needed at the moment.

That is what made PubMed Commons so helpful. An editor of a weekly or monthly journal could, a few months after its publication, flip down the PubMed listing of abstracts for his/her journal and then click on the PubMed Commons icon to ascertain the existence of post-publication commentary on the papers he/she had accepted. That would take a few minutes. Reading the comments could take some time, depending on their number and the editor’s expertise.

With over 3,000,000 articles published per year, I remain skeptical that 7,000 voluntary extra reviews (on all articles from all time) are going to be particularly helpful in a broad sense, even if our already overburdened editors could find the time.

As for PubMed Commons being “discontinued due to lack of participation and interest from the research community,” it might be wise to factor in possible political pressures from those who did not appreciate flaws in their papers being brought to light by post-publication reviews while their laboratories (and the funding agencies) were celebrating their publications in Cell or Nature!

I think that’s a reasonable caveat for any peer review system that requires the reviewer to be identified. In my opinion, anonymity is an essential part of the process, allowing one to speak truth to power without fear of retribution.

You are describing the MO of PubPeer. The distinctive feature of PubMed Commons was that many of the established non-anonymous contributors were well-positioned to provide commentaries, buttressed by their reputations among those knowledgeable in their fields, to speak their minds without fear of retribution. Thank you David for this interesting discussion.

Part of the work of a good editor is to discard or edit out any unnecessary rudeness from peer reviewers (and to make those reviewers aware that they have crossed a line). Yet another argument in favor of moderated pre-publication peer review over the wild west of the open internet.

Thanks for your comment, Donald. You make a good point about the labor that goes into reviewing—as you say, usually for free, and also undervalued when it comes to getting credit for scholarly work. The labor that researchers and scholars contribute makes the system work, and without it peer review couldn’t happen. I definitely could have addressed that aspect better in the piece, and it’s interesting that you suggest open systems can help reviewers by distributing the work (or at least that’s what I think you’re saying). But the work involved in giving detailed, expert feedback is also, I think, one of the barriers to open review, because few of us have the time to take on things like this unless directly asked by an editor.

It’s hard to entirely separate conversations about open peer review from the ever-crumbling job market for humanities scholars and the decreasing resources academia offers the humanities generally. I imagine open peer review could make scholars facing a scarcity of job and funding opportunities particularly nervous.

It is disturbing to read a view in which the Humanities are “behind” in experimenting with peer review, when historically they have been at the forefront. Let me just point out two examples:
1/ Book review: one of the oldest and most practiced forms of open peer review. You know the name of the book’s author and that of the reviewer; it is published in journals and is often the site of heated debate. Think of Alice Goffman’s book On the Run and the intense affair around its status and the status of “proof” and “evidence” in ethnography and anthropology https://en.wikipedia.org/wiki/Alice_Goffman

2/ The first known journal to experiment with open peer review is Current Anthropology, which can be classified as a Humanities journal, in 1959-1960. It was then called “open commentary,” but inviting 15 reviewers to openly comment on a text and publishing everything in the journal along with the authors’ answers is pretty much OPR. See https://hal.archives-ouvertes.fr/hal-01143310/document

With these two examples in mind, I don’t share the idea of a lagging Humanities field. To my knowledge, there are no studies on the “demography” of OPR across disciplines, unlike for double-blind (more common in SSH) and single-blind (more common in STM) review. But even in STM, regular OPR happens almost exclusively in “marginal journals” or at “new publishers” (à la F1000), rather than at established houses and periodicals; among the latter, the practice tends to be confined to “new journals” such as BMJ Open.
The same holds in the humanities, with journals such as Ada in gender studies https://adanewmedia.org/.
Even if there have been more critical comments about OPR in SSH than in STM (see https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0189311), there is already a long and rich history of OPR in diverse Humanities communities.

Didier is right regarding Current Anthropology, where Sol Tax implemented OPR without the Internet. I have called him “the father of OPR.”

In a review of Kathleen Fitzpatrick’s book which appeared in The Journal of Scholarly Publishing 44:1 (2012) I wrote the following about her discussion of peer review:

The critique of traditional peer review in chapter 1 is especially thought-provoking. Fitzpatrick scores some solid hits against the kind of peer-review process that still dominates the evaluation of scholarship today. For instance, as she observes, when only one reviewer raises a criticism, one doesn’t know whether this is representative of a general problem or merely reflects an idiosyncratic reaction of one reader. In sketching an alternative system for peer review, she emphasizes that its success depends on ‘prioritizing members’ work on behalf of the community’ (43). Reputation in such a system requires some way of reviewing the reviewers. The ability to publish might therefore be based on measurements of how helpful a scholar is in participating in group discussion. A system like this will ‘require a phenomenal amount of labor’ as well as a new set of metrics for reviewing the reviewers, but if ‘reviewing were a prerequisite for publishing, we’d likely see more scholars become better reviewers, which will in turn allow for a greater diversity of opinion and a greater distribution of the labor involved’ (46). This is definitely a promising approach as it directly addresses the chief challenges to a system of crowd review—namely, the incentives for participation and the poor quality of comments offered.

P.S. A point I did not discuss in this review is why monograph reviewing rarely employs the double-blind approach that prevails in journal peer reviewing.

I have not moved to open review in the journal I edit. Soliciting a couple of closed reviews is hard enough, and we need to move to a single, final article copy as soon as possible. Open review would mean posting the initial paper, then the reviews, then the revised version. And what benefit is there, really, from readers seeing review content, some of which is picky? It’s just not possible without resources, and I do not know how having multiple versions squares with the services that pick up the DOI, like Scopus – do they amend their metadata if a new version emerges? On the question of closed reviews being potentially vindictive: yes, but as editor, I do not pass on inappropriate comments to the authors.

I myself once published a paper that had open review, but I never got around to addressing the comments, and by the time I did, the journal had been bought out and had stopped allowing changes! (The Winnower, bought by Authorea.) I never bothered subscribing to the latter; it is not what I was looking for.
