In the spring of 2020, the STM Association released a draft taxonomy for peer review aimed at standardizing definitions and terminology. The draft was an output of an ongoing working group led by Joris van Rossum. Back in October, The Scholarly Kitchen ran a guest post from Micah Altman and Philip Cohen, who commented on the draft and articulated why they think a shared taxonomy is important. In their view, a shared taxonomy is necessary to build an evidence base for strategic investments and policy decisions, as well as to improve journal transparency and evaluation processes. Altman and Cohen go on to express concerns about what they see as the limited scope of the taxonomy and a lack of measurement of certain aspects of the peer review process.

Inspired by that post, I sat down with Joris and Lois Jones, a peer review manager at the American Psychological Association (APA), to talk about why both organizations decided a taxonomy was needed, whether the taxonomy is fit for purpose, and what, if anything, about peer review needs to change.

Before we get to the main interview, though, I first want to give a little more context. A while back, I wrote a post about Hindawi’s decision to part company with STM, in which I discussed STM’s position as a neutral platform rather than an agent for change. More recently, however, STM has taken a more active role in developing research infrastructure, a good example being the Research Data Year, which I reported on previously. I started by asking Joris whether the taxonomy is part of a broader shift in STM’s approach and attitude to modernization.


Joris, STM appointed Ian Moss as CEO a year ago. Also fairly recently, the organization has been involved in more aspects of what you might call research infrastructure. I’m thinking of the Research Data Year and the Shared Taxonomy working group. It looks like the STM Association is taking a slightly new direction. How true is that?

Joris: In our efforts toward open science, STM is focusing on improving research integrity, reproducibility, and transparency. Throughout 2021, we will be addressing a variety of topics alongside research data and review transparency. These include duplicate submissions, image manipulation detection, and the ethics and desirability of the use of artificial intelligence within scholarly publishing. Our aim is to advance trusted research collectively as scholarly publishers, working closely and fostering collaboration with other players within the ecosystem, such as funders, institutions, and librarians.

You’ve been leading STM’s working group on Peer Review Taxonomy. Could you tell us a little bit about why the working group was set up? Why do we need a taxonomy? 

Joris: There are several reasons why STM started this initiative. The last few decades have seen the emergence of new review models that are loosely labelled ‘open peer review’. But the creation of clear definitions has lagged behind, with the result that open review means different things to different people. For example, it can refer to a model where the identities of authors, reviewers, and editors are open during the review process; the publication of review reports and identities alongside the article; or the ability to comment on the article post-publication. So one reason for launching this initiative is to ensure we have a shared and consistent language. The working group, consisting of representatives from eight publishing organizations, created a terminology describing four elements of the process: identity transparency; who interacts with whom during the process; what information about the process is published; and whether post-publication review (which we relabeled ‘post-publication commenting’) is enabled. With this taxonomy we hope to cover the vast majority of models being used, both traditional and innovative.

Peer review is the backbone of scholarly and academic publishing, which calls for more transparency about the process; that is another reason for launching this initiative. With the variety of models being used today, publishers should communicate not just whether an article was reviewed, but also how it was reviewed. Similarly, we need to communicate to authors what they can expect in terms of review when submitting a paper and what information about the process will be published alongside the article. The taxonomy also contains a recommendation on how information about the review model used should be communicated to authors and readers in a consistent way.

I understand you’re now getting close to a milestone in the project. Are you launching a pilot soon?

Joris: We organized a public consultation phase in the summer of this year, in which we received valuable feedback from a wide range of stakeholders; we processed that feedback into a new version, which can be found on OSF. We will start a pilot in January 2021, in which publishers (including Cambridge University Press, Taylor & Francis, IEEE, APA, Elsevier, eLife, and MIT Press) implement the taxonomy at the journal and article levels for a selection of their journals. Journal level means that information is given about which review models are used for each specific journal. This can be through the Guide for Authors, Guide for Reviewers, or journal homepage, for example. Article level means that information about the peer review process is collected via the submission systems and published on the article page. The pilot will test both the comprehensiveness of the taxonomy and the technical and operational requirements.
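To make the article-level idea concrete, here is a minimal sketch, written in Python purely for illustration, of what review metadata collected from a submission system might look like. The field names and values are hypothetical, loosely based on the four taxonomy elements Joris describes above; they are not STM’s actual schema.

```python
# Hypothetical sketch: article-level peer review metadata as it might be
# collected from a submission system and published on the article page.
# Field names and values are illustrative only, not STM's actual schema.
article_review_metadata = {
    "identity_transparency": "double anonymized",   # who can see whose identity
    "reviewer_interacts_with": ["editor"],          # interactions during review
    "review_information_published": [],             # e.g., reports, identities
    "post_publication_commenting": "none",          # commenting after publication
}

# A journal-level version of the same fields could describe a journal's
# default model in its Guide for Authors or on its homepage.
```

A structure this small is deliberately easy for submission systems to collect and for production systems to pass through, which, as Joris notes below, is part of the appeal of keeping the taxonomy light.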

In a recent post in the Kitchen, the taxonomy was criticized for being limited in scope and missing elements relevant to potential use cases. There are two parts to this question: first, do you agree with the authors’ views about why a taxonomy is necessary? And if so, can you talk about whether you think there are components missing that are necessary to meet those goals?

Joris: Yes, I think the authors outline the rationale for the taxonomy well. In fact, throughout this project, we have experienced a broad consensus in the scholarly ecosystem on the need for this initiative. We also agree that the current scope is limited, but this was partly deliberate, as a relatively light taxonomy is easier to implement. Once the taxonomy is established, we can expand it to other areas (which will of course be done in close consultation with publishers and other stakeholders, including researchers themselves). On the other hand, there will still be elements of the process that might be less suitable for inclusion. For example, the authors mention ‘reviewer selection’. Although we agree that this is an important element of the process, we feel it might be challenging to categorize the different forms of reviewer selection and ask editors to report on it. Who selects (or recommends) reviewers, and what expertise they bring to the review, might differ from manuscript to manuscript, or may not be easily defined in a way everyone can agree on. This might apply to other elements as well. Moreover, in implementing the taxonomy, we have to take into account the capability of electronic submission systems to collect the relevant data and pass them on to production systems.

Lois, you work at the American Psychological Association. As a peer review manager at a publisher, why did you choose to get involved in the taxonomy working group?

Lois: Within our own organization, we’ve discovered that editors, authors, reviewers, and even journal staff approach the same terms with different definitions. The peer review process is anxiety-inducing for authors and can often be inscrutable to those involved. APA believes very strongly in providing transparency where we can as a way to increase equity and fairness in the process. An agreed-upon taxonomy is a concrete way to provide more clarity and transparency, which could help familiarize non-traditional or newer authors and reviewers with our processes. 

This question is for both of you. Many journal editors view their editorial workflows as an important part of their journal’s identity. To what extent do standardization efforts like this risk making things too homogenized or stifling individual journal identity?

Joris: This is a justified concern, but please note that the project aims to standardize the terminology and the transparency with which review models are communicated, not the review models themselves!

Lois: Yes, our goal is to help publishers more effectively catalog what is already happening rather than force editors into a box. If new editors are more aware of the options, they might pursue alternatives rather than continuing what the journal did during the past editor’s term for the sake of tradition. There are different workflows that can happen outside of our taxonomy, but the important thing is that authors should be aware of the process.

So, your view is that creating a taxonomy doesn’t necessarily imply a value judgement over whether one workflow is better or has more integrity than another.

Lois: Correct! What’s best for one journal, publisher, or field is not always what’s best for another. There are a lot of considerations that go into a peer review workflow, so the hope is that we’re providing clarity and an easier way for people to talk about the process, not recommendations for how journals should function. Which workflow works best for a journal is a separate consideration.

Peer Review Week wasn’t that long ago. Here at the Kitchen, we’ve asked the Chefs about trust in peer review and what the future of quality assurance in scholarly publishing might look like. Do you think that peer review needs to be fixed in some way? Will it evolve, perhaps, or even be replaced by something new?

Lois: Peer review is integral to science, but it could be at risk if publishers or reviewers’ institutions take the process for granted. I’m always looking for ways to recognize reviewers through different services, but APA also works to support reviewers in a number of ways. We try to ensure appropriate timelines for reviews, send sufficient reminders ahead of time along with clear instructions, respond promptly from the editorial office, and show basic empathy when things go wrong. This has been especially important this year, given the immense demands on everyone’s time, both personally and professionally. Reviewers are first and foremost individuals who are donating their time to improve science, so I think it’s important to continue listening to what they think needs to be fixed and where we need to go.

Joris: Personally, I don’t believe a major overhaul is required. We should keep in mind that surveys show that most researchers, despite seeing areas for improvement, still have trust in the system overall. I hope that with our project we manage to solve a major challenge, the lack of transparency, and thereby increase trust in the system. More generally, all players in the scholarly ecosystem (including funders and institutions, as well as publishers) must collectively ensure that peer review is fully recognized as a vital scholarly activity. Luckily, positive steps in that direction have been made over the last couple of years.

 

More information about the STM Working Group on Peer Review can be found on the association’s website here, along with Joris’ contact details for inquiries.

Phill Jones

Phill Jones is a co-founder of MoreBrains Consulting Cooperative. MoreBrains works in open science, research infrastructure and publishing. As part of the MoreBrains team, Phill supports a diverse range of clients from funders to communities of practice, on a broad range of strategic and operational challenges. He's worked in a variety of senior and governance roles in editorial, outreach, scientometrics, product and technology at such places as JoVE, Digital Science, and Emerald. In a former life, he was a cross-disciplinary research scientist at the UK Atomic Energy Authority and Harvard Medical School.

Discussion


“…we have to take into account the capability of electronic submission systems to collect the relevant data and pass them on to production systems” — yes, taking this approach would be slow, cumbersome, and expensive.

Instead, consider solving the problem using a post-publication PID-enabled “checklist”. Easier to scale, more flexible, and 100 times less expensive. There is no reason for this type of data to be processed inside the traditional manuscript sausage machine.

Richard Wynne – Rescognito
