Editor’s Note: Today’s post is by Jessica Polka, Executive Director of ASAPbio, a researcher-driven nonprofit working to promote innovation and transparency in life sciences publishing. Jessica leads initiatives related to peer review and oversees the organization’s general administrative and strategic needs.
The design of critical infrastructure determines what its users can do, and when. For example, the New York City subway system carries 1.7 billion passengers annually, shapes centers of residential and commercial activity, and enables a vibrant culture with its late night service.
Incredibly, it does this with a signaling system that predates World War II, which forces trains to be spaced far apart from one another, limiting capacity and causing delays. Upgrading the signaling system is necessary to meet current demands, but it is estimated to cost tens of billions of dollars and would require closing stations on nights and weekends, harming New Yorkers who depend on these services. Thus, the radical (but ultimately necessary) upgrade has been delayed in favor of putting out more urgent fires, for example track damage caused by Hurricane Sandy.
Similarly, journal management systems and publishing platforms act as essential infrastructure for scholarly communication. While more nimble than a metropolitan transport network, they nevertheless face challenges in balancing needs for both urgent fixes and aspirational developments. Over the long term, their supported features can shape the nature of scholarly communication, restricting or inspiring innovation.
Peer review innovation
Interest is mounting in modernizing peer review. In just the last year, a variety of new platforms and initiatives have launched: BioMed Central’s In Review; a collaboration among Wiley, ScholarOne, and Publons; and independent peer review services linked from both Europe PMC (see the “External Links” tab of these results) and bioRxiv (see the section on “Preprint discussion sites” in this example).
At a meeting organized by ASAPbio, the Howard Hughes Medical Institute, and the Wellcome Trust in early 2018, a group of approximately 90 researchers, funders, editors, and publishers discussed the merits of making peer review more transparent. Attendees expressed interest in three major areas: publishing peer reviews, credit for co-reviewers, and peer review portability. They also expressed concerns about the technological feasibility of implementing them.
To better understand the state of the art in technological support for these innovations, we’ve collected information from 18 journal management and publishing platforms about support for the suggested changes above, as well as other miscellaneous changes to the peer review workflow. You can view all responses in this table.
The responses cover four major areas, explored in detail below.
Publishing peer reviews
Most researchers and editors agree that peer review is essential to knowledge creation, and many see it as a form of scholarship in its own right. Given its importance and its potential value in helping readers interpret the context of a manuscript and the rigor of the editorial process, attendees at the meeting agreed that it makes little sense to keep the content of reviews private — with or without referee names. Meeting organizers summarized our thinking in an article accompanying the release of an open letter, now signed by more than 300 journals committed to making it possible to publish the content of their peer review processes. While many signatories have been publishing this content for over a decade, the last year has seen the launch of new collaborations (for example, between Wiley, ScholarOne, and Publons) and platforms (such as In Review) that enable more dynamic and immediate display of this content.
However, having the intention to publish review reports is not enough. Journals need to consider whether their content management system will support not only technical needs (ideally, the best practices worked out at a recent workshop, which include treating peer review reports as standalone objects) but also the necessary workflow changes (which can include obtaining additional permissions from reviewers, or enabling editors to remove potentially sensitive information before publishing a review).
Development status: All respondents to our survey of tools included some information about the publication of peer review reports, with the majority of applicable platforms already supporting this and the remainder noting that features are under development. Of course, the devil is in the details, especially for journals that rely on the interoperation of multiple tools in a publishing chain. Future efforts could provide more granular information on the combinations of tools for which support is provided, and details of how reports can be made available.
Credit for co-reviewers
COPE authorship guidelines dictate that manuscript co-authors who have made significant intellectual contributions to the project be recognized for their work. Strangely, a similar norm does not exist when the scholarly output is a review, rather than the manuscript itself. For example, a survey by eLife found that many early-career researchers are contributing to peer review, yet preliminary data from the TRANSPOSE project indicates that few journals provide a way to acknowledge the contributions of co-reviewers (individuals, often grad students or postdocs, who help an invited referee complete their report).
Giving proper credit to co-reviewers benefits them directly: they can feel more confident sharing their experience on CVs, job applications, or even green card applications in the US. Surfacing their contributions is also helpful to editors since it can broaden the pool of peer reviewers. There are several questions that need to be addressed in order to make this feasible:
- Can a journal be made aware of co-reviewers’ contributions, or does a restrictive confidentiality policy discourage reviewers from disclosing this kind of assistance?
- Is there a dedicated space in the review form where their names can be easily added?
- Can these individuals receive credit for their contributions via ORCID or Publons?
- Finally, can the co-reviewers be easily integrated into the reviewer database?
Development status: Facilities to help identify and credit co-reviewers are not well-developed; this area had more blank fields and short responses than those pertaining to publishing reports. Perhaps this is because co-reviewing is a seldom-discussed topic, at least compared to some others on this list. As awareness grows, demand for this support may increase.
Peer review portability
Researchers and editors spend an estimated 15 million hours annually conducting and managing peer review for papers that are rejected. In an ideal world, these reviews could be transferred to other journals, helping authors find a suitable home for their work.
As discussed previously in The Scholarly Kitchen, MECA, the Manuscript Exchange Common Approach, is being developed as a standard way for journals to transfer papers to one another. MECA defines a container format that will encapsulate all elements of a manuscript, its metadata, and associated process files. While the standard is still under development, eLife has deployed a MECA-compliant module that represents the standard’s first live use. However, to enable MECA’s primary purpose of submission transfer between publishers, more journals need to be able to implement it. This will require manuscript transfer systems to export and import material shared in this format.
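To make the container-format idea concrete, here is a minimal sketch of what a manuscript transfer package could look like: a single archive bundling the manuscript, its metadata, and peer review reports. This is purely illustrative — the file names and layout are assumptions of this sketch, not the MECA specification itself, which had not been publicly released at the time of writing.

```python
import json
import zipfile


def build_transfer_package(path, manuscript_text, metadata, reviews):
    """Bundle a manuscript, its metadata, and review reports into one
    ZIP archive. Hypothetical layout for illustration only; NOT the
    actual MECA file structure."""
    with zipfile.ZipFile(path, "w") as archive:
        archive.writestr("manuscript.txt", manuscript_text)
        archive.writestr("metadata.json", json.dumps(metadata))
        for i, review in enumerate(reviews, start=1):
            archive.writestr(f"reviews/review-{i}.txt", review)


def list_transfer_package(path):
    """A receiving journal's system can inspect the same archive."""
    with zipfile.ZipFile(path) as archive:
        return sorted(archive.namelist())
```

The point of a shared standard is that the importing side can rely on the exporting side's layout: any journal management system that understands the agreed structure can unpack a transferred submission, including its review history, without bespoke integration work for each publisher pair.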
Development status: Excitingly, several journal management systems are already developing (or plan to develop) MECA compliance, which is impressive given that no specification appears to have yet been publicly released. The completion of NISO’s work on the project could increase this number further.
Other workflow innovations
Ideally, journals could choose from a large menu of peer review implementation options. For example, will the review be single blind or double blind? Can reviewers interact with one another and possibly also authors through a forum? Could the peer review workflow be customized to more efficiently engage highly specialized technical reviewers? While the will to experiment must come from editors (we catalog such trials at ReimagineReview), it must be enabled by a flexible technological platform.
Development status: Given the open-ended nature of this question, responses were mixed, but several platforms discussed different blinding options and discussion modules.
Service providers are encouraged to contact ASAPbio to add or update information about their projects.
Vendors are likely in good communication with their clients already. But perhaps editors could influence the prioritization of development roadmaps by making known their longer-term goals in addition to their immediate needs. By doing so, we can ensure that the tooling for communicating science matches the needs and philosophies of communities of editors and researchers.
Thanks to Judy Luther and Naomi Penfold for helpful discussions.