In 2016 the American Association of University Presses released a handbook of Best Practices for Peer Review. Focused on books, the handbook is intended “as a resource for member publishers, acquisitions editors both new and experienced, faculty editorial boards, scholarly authors and researchers, and new scholarly publishing programs.” Today’s guest blogger, Catherine Cocks, Senior Acquisitions Editor at the University of Washington Press, explores how peer review practices are shifting, and will continue to shift, to accommodate long-form digital scholarship.

In contrast to our colleagues in STEM journal publishing, HSS book editors aren’t facing concerted demands to transform peer review. The vast majority of scholarly monographs still go out to two subject experts in a single-blind process managed by the acquisitions editor. But multiple experiments in creating platforms for multimedia scholarly work, a number of them funded by the Mellon Foundation, are underway. These new platforms will bring with them revised workflows. How might they affect the process of peer review?

I put this question to four people currently working on digital platforms: three concentrating on books and, for a different vantage, one on journals.

Currently, the impact on peer review largely depends on how much the scholarship in question diverges from the traditional print monograph in form and audience. At the four university presses, peer review hasn’t changed much. Typically, proposals for digital projects go to peer review and then to the editorial board for approval, just like print projects up for advance contract. Then complete drafts go back out to peer review before being submitted to the board for final approval.

Yet at both the proposal and full draft stage, digital projects raise design and technical issues that the venerable book format resolved long ago. How well the author has constructed and designed the project must be evaluated along with the content. Are such evaluations the job of the external reviewers? At Michigan and Minnesota, the answer is mostly no. Mary notes that, although her press requires authors to submit a “proof of concept” along with the standard proposal for peer review, the press is part of the library and its IT specialists vet project interfaces and back-end functionality. This evaluation is more akin to the developmental and production work press staff regularly perform for print projects than peer review.

Presses without readily available IT specialists might turn to Minnesota’s Manifold, which was conceived as a platform for long-form scholarly arguments supplemented by additional resources, such as audio and video files. Because it is meant to be scalable, replicable, and easy for press production staff to use, Manifold provides the functionality and the user interface. As a result, as at Michigan, the peer reviewers are selected primarily for their subject matter expertise, just as they are for conventional monographs.

Washington and Stanford have augmented or plan to augment traditional peer review to reflect the nature of the projects they’re working on. Stanford requires evidence that the project’s design is viable at the proposal stage. Proposals must include a prototype of the project that conveys the design strategy of the final project and one or two fully built-out sections of the project. Proposal and prototype go out to three or four reviewers: two are disciplinary specialists, one of whom ideally is familiar with digital projects, and one or two additional reviewers are format specialists. All the reviewers are asked some basic usability/design questions. Friederike also requires authors to submit write-ups guiding reviewers through these genre-busting projects.

Washington’s digital project, in development for a platform being created in partnership with UBC Press and others, is a collaboration between a scholar and members of an Indigenous community. Because it’s intended to be a resource for both that community and academics, reviewers will include individuals from the community who are not among the co-creators as well as scholars recruited by Larin to evaluate the project’s scholarly and intellectual merits. All the reviewers will address issues of usability, though their expectations for and paths through the material may be very different.

Just as at Minnesota and Michigan, a lot of the back-end evaluation happens in-house or among the project’s partners. Stanford staff evaluate the sustainability and archivability of every project and formulate recommendations or requirements. Like Mary at Michigan, Friederike regards the technical evaluation not as a peer activity but as part of the editorial development process, much like copyediting. With partner institutions taking primary responsibility for the technical and design work, Washington is focusing mainly on the conceptual development and community engagement aspects of its project.

Things are quite different at Kairos, a long-running online journal in digital writing studies, helmed by Cheryl and Doug Eyman. (Though Cheryl is also developing a publishing platform, Vega, that will address some of the production-side challenges that multimedia-rich publishing venues have faced, we didn’t discuss it.) Here, peer review looks a lot like an extremely thorough version of the developmental editing typically done by acquisitions editors and series editors, plus the content vetting performed by external reviewers. Kairos does not offer a design template, so authors develop their concept or prototype through multiple rounds of review performed by varying subsets of the journal’s large editorial board. These reviews are completely open, partly because it is logistically difficult to scrub author identities from digital files and partly because scholars in this field are already committed to collaborative editing. The large number of reviewers ensures that scholarly content, technical aspects, and design all get thoroughly evaluated. Kairos derives its success from a unique combination of the technical, scholarly, and social infrastructures necessary for generating sustainable digital scholarship in the humanities (see Cheryl’s 2015 essay with Doug in Rhetoric and the Digital Humanities for more on this topic).

The experiences of the four presses I’ve described here offer other ways of building and connecting these elements. As the new platforms develop and are more widely used, the distinction between the labor of press staff and that of external reviewers may blur or shift. However, peer review in its traditional single-blind form seems robust enough to accommodate many of the new formats, though complementary forms of evaluation, from the editorial vetting of prose to the evaluation of production challenges or the input of community members engaged in collaborative projects, are also often necessary—as they always have been.

Karin Wulf

Karin Wulf is Director of the Omohundro Institute of Early American History & Culture and Professor of History at the College of William & Mary. She is a scholar of early American and Atlantic history working on gender, family and sexuality.

Discussion

3 Thoughts on "Does Born-Digital Mean Rethinking Peer Review?"

Excellent, thanks so much, Karin, for focusing some attention on peer reviewing for book projects, which has so seldom been discussed on this blog, and thanks to Catherine for canvassing a range of different innovative digital projects and their methods of peer review. We need more of this kind of posting on TSK.
P.S. An associated question, which proved a problem for the pioneering Gutenberg-e and ACLS Humanities Ebook projects, was the willingness of scholarly journals to review such nontraditional digital projects. Perhaps that could be covered in a future post.

In my experience, university presses have great people managing the book development side. But what is totally missing from all but the very top presses is a proactive response to mistakes in published books. Publishers are failing to use the advantages of the internet to help their readers.

This was brought home to me recently when I counseled a student with pre-existing mental health issues who was literally shaking in terror because she couldn’t understand a textbook. I checked it out. It had a misprint in a difficult equation. That textbook is still on sale today, uncorrected, with no erratum page on the publisher’s website. Don’t tell me the publishers don’t know, there’s a torrent of 1-star reviews on Amazon and social media complaining about it.

And that’s just a minor, common kind of slip-up. Some years ago a university press published an entire book based on fraudulent research. These things happen, but years on, that withdrawn book still gets cited by people who don’t realize. Where’s the warning about this on their website? Right now, I can see 85 libraries on WorldCat that have this book in stock. Why has the press never contacted librarians to have the book pulled, or even just marked with a warning sticker on the front? I’ve seen journals that paywall their erratum lists as if they were ordinary articles. I mean, good grief, if there’s one thing that should be easily accessible…

I’ve put this to people and I hear answers like “we review everything very carefully.” Fine, I believe you. But lots of books have a misprint somewhere, and many have more than that. It’s time for every publisher to commit to maintaining a quick-turnaround, easily accessible directory of erratum lists, and to emailing known purchasers immediately to notify them of major mistakes.

As a former university press director, I agree with you completely.

Comments are closed.