September is peer review month, with both the Ninth International Congress on Peer Review and Scientific Publication (Chicago, September 8-10) and this year’s Peer Review Week (September 19-23) taking place in the next couple of weeks. You’ll be seeing lots of peer review-related content on The Scholarly Kitchen too — starting now! Today sees the launch of the Peer Review Workbench (PRW), a new initiative from Elsevier, led by Bahar Mehmani, Reviewer Experience Lead in their Global STM journals team. Bahar kindly agreed to answer my questions about the PRW, which promises to be a valuable new tool for the growing field of research on research (aka meta-research).
Please can you tell us a bit about yourself — what you do and how you got there?
I oversee the features in the submission systems Elsevier journals use to orchestrate the peer review process. Data analysis, innovation, and thinking outside the box are an important part of my role. I try to see what could be done differently to address reviewer pain points such as lack of engagement, inadequate instructions, and insufficient recognition. Before joining Elsevier, I was a researcher. I received my PhD in Theoretical Physics from the University of Amsterdam. My experience of the journal peer review process during my PhD wasn’t very inclusive, engaging, or helpful. I therefore created a journal club for my junior colleagues; full professors were allowed to join but not to comment unless invited. The idea was to boost the self-confidence of PhDs and postdocs in critiquing published papers. In 2011, I moved to the Max Planck Institute for the Science of Light to pursue my academic career as a postdoc. During that time, I started to peer review manuscripts, but without any formal training it was like walking in the dark. I had the same experience when I started to write my first paper, but at least then I could share my manuscript with my supervisor and discuss it with my fellow journal-club members. Peer review was a different story. The confidentiality of the process and the lack of editor and author feedback after I submitted my recommendations made me think further about reviewer engagement and recognition. With these thoughts in mind, I joined Elsevier in 2013 as a managing editor with the goal of improving the peer review process.
What is the new tool you’re about to launch, the Peer Review Workbench, and how did it come about?
Peer Review Workbench (PRW) is a platform that gives interested academics and researchers responsible access to enhanced and enriched Elsevier journal and manuscript metadata inside our ICSR Lab, which is dedicated to conducting evidence-based research on peer review. Researchers and academics can apply to access metadata for manuscripts in Elsevier journals in order to run systematic analyses of peer review processes across disciplines and at scale.
The PRW data includes metadata from all Elsevier proprietary journal manuscripts that were processed between January 2018 and May 2021 (this will be updated annually). It includes (a sketch of one such record follows the list):
- Submission data such as submission and editorial decision dates, number of co-authors, author position in the co-author list, keywords, etc.
- Peer review data such as dates of reviewer invitation, acceptance, and review completion, as well as reviewer recommendations
- Editorial decision data such as decision date and editor role
- Country of affiliation for authors, reviewers, and editors, plus their inferred gender using Namsor (recognizing its limitations)
- Journal subject area and impact factor quartile
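To make the shape of this metadata concrete, here is a minimal sketch in Python of what one manuscript record combining these fields might look like. Every field name below is an illustrative assumption of mine, not the actual PRW schema.

```python
# Hypothetical illustration of a PRW-style manuscript record.
# All field names are assumptions for illustration; this is NOT
# the actual Peer Review Workbench schema.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ManuscriptRecord:
    manuscript_id: str            # pseudonymous identifier, not a real ID
    submitted: date               # submission date
    decision_date: date           # editorial decision date
    n_coauthors: int              # number of co-authors
    keywords: List[str]
    reviewer_invited: List[date]  # one entry per invited reviewer
    reviewer_accepted: List[date]
    review_completed: List[date]
    recommendations: List[str]    # e.g. "accept", "minor revision"
    editor_role: str              # e.g. "handling editor"
    author_countries: List[str]   # country of affiliation only, no names
    inferred_genders: List[str]   # name-based inference, with known limits
    subject_area: str
    if_quartile: int              # journal impact factor quartile (1-4)

example = ManuscriptRecord(
    manuscript_id="m-000123",
    submitted=date(2019, 3, 1),
    decision_date=date(2019, 6, 15),
    n_coauthors=3,
    keywords=["peer review", "meta-research"],
    reviewer_invited=[date(2019, 3, 5), date(2019, 3, 5)],
    reviewer_accepted=[date(2019, 3, 7), date(2019, 3, 9)],
    review_completed=[date(2019, 4, 2), date(2019, 4, 20)],
    recommendations=["minor revision", "major revision"],
    editor_role="handling editor",
    author_countries=["NL", "DE", "IR"],
    inferred_genders=["female", "male", "unknown"],
    subject_area="Physics and Astronomy",
    if_quartile=2,
)
```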
While peer review is a fundamental element of academic research, it has not yet been investigated extensively as a scientific research topic in and of itself, mainly due to lack of access to peer review data. We therefore see it as our responsibility to work jointly with the academic community to change this, and to respond to researchers’ calls for such data to enable them to study peer review as a research topic. The PRW aims to help address the pressing need for further transparency in, and evidence-based studies on, the journal editorial and peer review process, in pursuit of continuous improvement for research, science, and society. Our experience with the PEERE project shows that sharing information on all aspects of the peer review process can drive progress in publishing, while protecting — and championing — the interests of authors, reviewers, editors, and researchers more broadly (see, for example, our paper on the gender gap in submission and peer review during the first wave of the COVID-19 pandemic).
Who and what was involved in developing the PRW?
Responsibly sharing manuscript metadata at scale requires knowledge, experience, infrastructure, and a well-designed workflow. Through our experience with the PEERE project, and based on the advice and knowledge of that group, we learned about data harmonization and enrichment at scale. With the launch of ICSR Lab, it became possible to host the data in such a way that every interested researcher can access the relevant data for further analysis. A shout-out to my colleagues at Aries, ICSR, and the Elsevier data science research content team for all their work on this.
What barriers did you face and how did you overcome them?
Making this multi-stakeholder project a priority among all the pressing items on the to-do list of every team involved was not an easy task. But luckily everyone could see the value and was kind enough to reshuffle their priorities and find the bandwidth to make the PRW a reality.
One of our main concerns was to enable data sharing while also respecting data privacy. We have taken precautions to protect the identity of authors, reviewers, and editors. Identifiers are removed from the dataset, and data is anonymized at both the journal and individual level; by introducing additional processing, we do our best to minimize the risk of identifying an individual person. For example, we removed outliers from the dataset before sharing it, and we slice the data for each accepted proposal so that researchers receive only the items relevant to their project.
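As a rough illustration of what such outlier trimming might look like (a hedged sketch, not Elsevier’s actual anonymization pipeline), one might remove manuscripts with near-unique values of a quasi-identifying variable, such as co-author count, before release:

```python
# Illustrative sketch only; NOT Elsevier's actual anonymization
# pipeline. Trims rare, potentially re-identifying values (here,
# unusually high co-author counts) before a dataset is released.
import pandas as pd

def trim_outliers(df: pd.DataFrame, column: str, upper_q: float = 0.99) -> pd.DataFrame:
    """Drop rows whose `column` value exceeds the given quantile."""
    cap = df[column].quantile(upper_q)
    return df[df[column] <= cap]

# A manuscript with 300 co-authors is close to unique, and therefore
# easier to re-identify, so it is removed before sharing.
records = pd.DataFrame({"n_coauthors": [2, 3, 4, 5, 300]})
released = trim_outliers(records, "n_coauthors")
print(released)  # the 300-co-author row is gone
```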
To enhance transparency in the proposal approval process, we have formed a Peer Review Workbench Academic Advisory Board. They will review proposals for their scope and data relevancy, ethical considerations, sample size calculations and statistical analysis plan, best practices, and the value of the question being posed to science and society. All applicants must agree to our data protection terms to minimize the risk of de-anonymization. You will find details of the Advisory Board and the proposal submission protocol on the Peer Review Workbench website.
Peer review studies are a very niche field, so we know that a first-come, first-served approach is not going to help with supporting DEIA or early career progress in these topics. Working with the Academic Advisory Board, therefore, we are trying to ensure diverse, equitable, and inclusive access to the database during the proposal review process. We will keep the initial submission of proposals open until early January 2023 and then evaluate them, encouraging applicants with similar concepts to collaborate and join forces. We will monitor our success in creating new collaborations and reopen the call for proposals next September. We strongly encourage researchers to consider inclusivity and diversity, as well as the Sex and Gender Equity in Research guidelines, in submitted research proposals. Furthermore, we recommend that teams aim for equal representation of genders and the inclusion of early career researchers in their projects. Authorship questions are best addressed at the start of a project, and we recommend using authorship agreement templates.
Were there any data protection issues you had to address?
Privacy protection: we ensure that the data is anonymized and made available in aggregate only after a peer review research proposal has been reviewed and accepted by the Advisory Board. We will also encourage applicants who work with the data to flag any item they encounter that poses a data privacy risk, so we can improve our processes.
The data will be available only in the Databricks environment, and it is pre-processed for each proposal so that each research group receives only the subset of data appropriate for their own project.
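In a Databricks setting, that per-proposal slicing could look roughly like the PySpark sketch below. The table name, column names, and output location are all invented for illustration; they do not reflect the real PRW tables inside the ICSR Lab environment.

```python
# Hypothetical sketch of per-proposal slicing in PySpark. The table
# and column names are invented; they do not reflect the actual PRW
# tables inside the ICSR Lab Databricks environment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("prw-slice-demo").getOrCreate()

manuscripts = spark.table("prw.manuscripts")  # assumed table name

# A proposal approved to study physics journals in 2019-2020 would
# receive only the matching, already-anonymized rows, and only the
# columns approved for that project.
subset = (
    manuscripts
    .where(F.col("subject_area") == "Physics and Astronomy")
    .where(F.year("submission_date").between(2019, 2020))
    .select("submission_date", "decision_date", "n_coauthors",
            "reviewer_recommendations", "if_quartile")
)

# Materialize the slice into a workspace visible only to that group.
subset.write.mode("overwrite").saveAsTable("proposal_1234.manuscripts")
```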
We will treat the peer review comments separately. Rather than sharing the full text, we plan to preprocess it and provide derived signals. We know that we need to consider requests for this data separately and learn step by step.
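A “signal” here could be as simple as coarse, non-identifying features derived from the review text. The sketch below is one guess at what such preprocessing might produce; it is not the PRW’s actual method.

```python
# Illustrative only; not the PRW's actual preprocessing. Derives
# coarse, non-identifying "signals" from a review instead of
# exposing the text itself.
import re

def review_signals(text: str) -> dict:
    """Return simple aggregate features of a review's text."""
    words = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "question_count": text.count("?"),
        "mentions_methods": "method" in text.lower(),
    }

print(review_signals("Interesting study. How was N chosen? The methods need more detail."))
# {'word_count': 11, 'sentence_count': 3, 'question_count': 1, 'mentions_methods': True}
```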
Who can use the Workbench, and how will you be evaluating their applications?
The PRW is intended for scholarly research only. Anyone with an academic affiliation, coding skills, and a good question/hypothesis to test can apply. Due to the sensitive nature of the data, it is important that we understand who needs access, to what data, and the approximate timelines for access.
Applications are initially screened by the Peer Review Workbench internal team (myself, my PRW data science colleagues, and our data protection officer) to check scope and data availability. Where necessary, for example if the topic of a proposal is beyond the expertise of our Advisory Board, we might need to send it to external reviewers.
Our review process considers:
- Applicability of the proposal and its alignment with the Peer Review Workbench research themes listed in our white paper
- The scope and validity of the methodology (in line with best practices, achievable with this dataset and the technology, not requiring any external datasets)
- Ensuring data privacy protection, data security, research integrity, and inclusion
- The potential of the research results to inform practice in research evaluation and policy
- Overall feasibility and quality
- Coding skills
What sorts of projects and organizations are you hoping will make use of the Workbench?
Potential studies could include topics such as the fate of rejected manuscripts, bias in peer review and the editorial decision-making process, the relationship between reviewer recommendations and editorial decisions, the impact of the number of review rounds on the success of papers, etc. You can find more information linked on the Workbench website.
How will you ensure that Workbench users and projects are as diverse, equitable, inclusive, and accessible as possible?
Firstly, by ensuring all interested researchers know about this new tool; I greatly appreciate your help with that. Secondly, by being transparent in our proposal evaluation and acceptance procedures, and by introducing mechanisms that are not based on first-come, first-served. We are also open to hearing suggestions from the community on how best to ensure inclusion and accessibility.
What are your hopes for the future of the Workbench?
To help establish peer review as a research topic in its own right, to promote reproducible studies, and to go beyond Elsevier data, because running peer review studies at scale will require access to other publishers’ and societies’ manuscript metadata. I hope that the lessons learned that we shared in our white paper will help other publishers that plan to share this sort of data – and I’d love to hear about any similar initiatives.
Discussion
How interesting! Bahar, if you’re following these comments, are there any plans to include data from other publishers? I did not see anything about expansion in your FAQ. https://lab.icsr.net/faq.html
Dear Gabriel,
We welcome other journals and publishers to join this initiative, and I am more than happy to explore how we can collaborate to make it happen.
And here is the link to our white paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4211833