In 2016 the American Association of University Presses released a handbook of Best Practices for Peer Review. Focused on books, the handbook is intended “as a resource for member publishers, acquisitions editors both new and experienced, faculty editorial boards, scholarly authors and researchers, and new scholarly publishing programs.” Today’s guest blogger, Catherine Cocks, Senior Acquisitions Editor at the University of Washington Press, explores how peer review practices are shifting, and will continue to shift, to accommodate long-form digital scholarship.
In contrast to our colleagues in STEM journal publishing, HSS book editors aren’t facing concerted demands to transform peer review. The vast majority of scholarly monographs still go out to two subject experts in a single-blind process managed by the acquisitions editor. But multiple experiments in creating platforms for multimedia scholarly work, a number of them funded by the Mellon Foundation, are underway. These new platforms will bring with them revised workflows. How might they affect the process of peer review?
I put this question to five people currently working on digital platforms: four concentrating on books and, for a different vantage, one on journals:
- Susan Doerr, assistant director, digital publishing and operations director at the University of Minnesota Press, who is overseeing the development of the Manifold platform;
- Mary Francis, editorial director at the University of Michigan Press, where she is editorial lead for its Fulcrum platform;
- Larin McLaughlin, editor in chief at the University of Washington Press and acquiring editor for one of the inaugural projects of a UBC Press–led initiative creating a digital platform for Indigenous studies;
- Friederike Sundaram, acquisitions editor for Stanford University Press’s digital projects program;
- Cheryl Ball, editor of the journal Kairos, co-PI of the Vega publishing platform, and director of the Digital Publishing Institute at West Virginia University Libraries.
Currently, the impact on peer review largely depends on how much the scholarship in question diverges from the traditional print monograph in form and audience. At the four university presses, peer review hasn’t changed much. Typically, proposals for digital projects go to peer review and then to the editorial board for approval, just like print projects up for advance contract. Then complete drafts go back out to peer review before being submitted to the board for final approval.
Yet at both the proposal and full draft stage, digital projects raise design and technical issues that the venerable book format resolved long ago. How well the author has constructed and designed the project must be evaluated along with the content. Are such evaluations the job of the external reviewers? At Michigan and Minnesota, the answer is mostly no. Mary notes that, although her press requires authors to submit a “proof of concept” along with the standard proposal for peer review, the press is part of the library and its IT specialists vet project interfaces and back-end functionality. This evaluation is more akin to the developmental and production work press staff regularly perform for print projects than peer review.
Presses without readily available IT specialists might turn to Minnesota’s Manifold, which was conceived as a platform for long-form scholarly arguments supplemented by additional resources, such as audio and video files. Because it is meant to be scalable, replicable, and easy for press production staff to use, Manifold provides the functionality and the user interface. As a result, as at Michigan, the peer reviewers are selected primarily for their subject matter expertise, just as they are for conventional monographs.
Washington and Stanford have augmented or plan to augment traditional peer review to reflect the nature of the projects they’re working on. Stanford requires evidence that the project’s design is viable at the proposal stage: proposals must include a prototype conveying the design strategy of the finished work, along with one or two fully built-out sections. Proposal and prototype go out to three or four reviewers: two are disciplinary specialists, one of whom ideally is familiar with digital projects, and one or two additional reviewers are format specialists. All the reviewers are asked some basic usability/design questions. Friederike also requires authors to submit write-ups guiding reviewers through these genre-busting projects.
Washington’s digital project, in development for a platform being created in partnership with UBC Press and others, is a collaboration between a scholar and members of an Indigenous community. Because it’s intended to be a resource for both that community and academics, reviewers will include individuals from the community who are not among the co-creators as well as scholars recruited by Larin to evaluate the project’s scholarly and intellectual merits. All the reviewers will address issues of usability, though their expectations for and paths through the material may be very different.
Just as at Minnesota and Michigan, a lot of the back-end evaluation happens in-house or among the project’s partners. Stanford staff evaluate the sustainability and archivability of every project and formulate recommendations or requirements. Like Mary at Michigan, Friederike regards the technical evaluation not as a peer activity but as part of the editorial development process, much like copyediting. With partner institutions taking primary responsibility for the technical and design work, Washington is focusing mainly on the conceptual development and community engagement aspects of its project.
Things are quite different at Kairos, a long-running online journal in digital writing studies, helmed by Cheryl and Doug Eyman. (Though Cheryl is also developing a publishing platform, Vega, that will address some of the production-side challenges that multimedia-rich publishing venues have faced, we didn’t discuss it.) Here, peer review looks a lot like an extremely thorough version of the developmental editing typically done by acquisitions editors and series editors, plus the content vetting performed by external reviewers. Kairos does not offer a design template, so authors develop their concept or prototype through multiple rounds of review performed by varying subsets of the journal’s large editorial board. These reviews are completely open, partly because it is logistically difficult to scrub author identities from digital files and partly because scholars in this field are already committed to collaborative editing. The large number of reviewers ensures that scholarly content, technical aspects, and design all get thoroughly evaluated. Kairos derives its success from a unique combination of the technical, scholarly, and social infrastructures necessary for generating sustainable digital scholarship in the humanities (see Cheryl’s 2015 essay with Doug in Rhetoric and the Digital Humanities for more on this topic).
The experiences of the four presses I’ve described here offer other ways of building and connecting these elements. As the new platforms develop and are more widely used, the distinction between the labor of press staff and that of external reviewers may blur or shift. However, peer review in its traditional single-blind form seems robust enough to accommodate many of the new formats, though complementary forms of evaluation, from the editorial vetting of prose to the evaluation of production challenges or the input of community members engaged in collaborative projects, are also often necessary—as they always have been.