We are delighted to be kicking off Peer Review Week 2023, as in previous years, with our own thoughts on the overall theme. This year, the focus is on peer review and the future of publishing — a meaty topic, to be sure. People have been predicting the end of scholarly publishing for decades, all the more so since the advent of the internet: it has been criticized for not being innovative enough; for being increasingly dominated by a handful of big commercial players; and for being overly focused on the needs of STEM researchers, and on those working in the Global North. Despite these (legitimate) issues, scholarly publishing is still thriving — but will that continue to be the case? How will it be affected by the challenges and opportunities of artificial intelligence? By our increasing awareness of the value of different types of knowledge and outputs (Indigenous, non-English language, non-traditional outputs)? By the divisive partisan politics that are leading to book bans and other forms of censorship? By the challenges to research integrity posed by paper mills, predatory publishers, and other bad players? And more…

Bringing things back to Peer Review Week (PRW), what does all of this mean for peer review? Will it be more or less important? Will the review process need to change and, if so, how? Will human reviews be replaced, partly or fully, by machine reviews? We’ve already heard what some of our fellow Chefs think in last week’s Ask the Chefs. This week, we are going to be publishing posts representing a diverse range of subjects and perspectives on this topic: ending human-dependent peer review in order to be more equitable and sustainable; open source code sharing and what it means for the review process; new guidelines for ensuring that peer review is diverse, inclusive, and equitable; and (critically) what these changes could mean for publishing roles and careers in the future.

But first, here’s our own personal high-level view of what some of publishing’s current and future challenges could mean for peer review, from the perspective of a scientific publisher (Jasmine), a humanities researcher/librarian (Karin), and a scholarly communications/infrastructure consultant (Alice). We hope you’ll also share your own views in the comments!


Jasmine Wallace

Predictions regarding the future of publishing would not be complete without the inclusion of machine and human learning. While most predictions are rooted in uncertainty, we can be certain that there is no future of publishing that excludes automation driven by both systems and people who are able to learn and adapt. This is especially true for STEM publishers, who are striving to open up their science.

As publishers attempt to integrate guiding principles that uphold the core values of open science, and look for indicators that they’re succeeding along that journey, there should also be a major focus on their workforce. And not just a focus on content creators such as authors, reviewers, and editors, but also on the staff expected to operationalize their strategic objectives.

The publishing industry has not been excluded from the larger issues at play in society; therefore, we must evolve as an industry. Burnout, quiet quitting, and job cuffing are all very real publishing workplace trends. And while some trends come and go with very little impact, these are not just passing fads but part of a larger movement that emerged as a result of the pandemic. Needless to say, no one is going back to water cooler conversations. Prioritizing life-work balance is the new normal for many, and it does not necessarily include upskilling as part of professional development. And while there is a need to pull in new skills with limited budgets, publishers should carefully navigate quiet hiring. Currently, our infrastructures, which are built primarily on people-driven workflows, are not equipped to handle the repercussions of threatened, burned-out, career-cushioning staff. While we are designing tools and systems to support the work, we must also keep the human element at the forefront, augmenting our teams’ skills within an environment where they feel appreciated and valued.

Taking a brief look back, there was a hefty Ask the Chefs conversation on What is the value of a Master’s in Publishing? It was written well before 2023, but I think this is the season where its value comes into play. Publishing has never been an industry built on mastery. Most careers in our industry develop by way of apprenticeship-style learning and opportunities. Once in positions, we learn and develop within the context of our work. This places our workforce in an interesting predicament as we integrate advanced tools that require a more formalized and unified knowledge base. Without those skillsets, it will be challenging to navigate the emerging tools that will help us improve peer review. Therefore, a focus on ensuring we have creative adaptors leading the charge toward human and A.I. integration in peer review is essential for its future. Collaboration amongst publishers, sharing lessons learned (even failures or mistakes), and hearing from not just diverse voices and perspectives, but also asking unlikely leaders questions they are not normally asked, will get us closer to better answers.

Alice Meadows

In last year’s PRW kickoff post, we noted that: “Research doesn’t just mean medical and scientific research — it encompasses all disciplines and geographies, and it includes both academic and commercial research. And peer review isn’t confined to just research journal articles — it is conducted in a wide range of settings, both related to publishing (books, data sets, audio-visual, and more) and to other phases of the research process (grant applications, promotion and tenure reviews, etc.). To take account of this, peer review is conducted in many different ways and for many different purposes, all intended to improve trust in the research being reviewed.”

We may not know all the ways that publishing will change in the future, but we do know that, in order for it to continue to play this important role in improving trust in research, peer review will inevitably have to continue to evolve too. This is already happening, as we will be hearing in posts later this week. Getting the right balance between human and machine review will be critical, as will ensuring that we have a robust and widely accessible infrastructure in place to help support a rigorous and constructive peer review process.

For example, tools like iThenticate, which powers Crossref’s Similarity Check service, while far from perfect, have played an important role in helping detect plagiarism — but are they still fit (enough) for purpose in a world of increasingly sophisticated fraud? Image manipulation is another area where technology can help; however, having attended Dr. Elisabeth Bik’s 2023 SSP keynote about her work on this, it’s hard to imagine the kind of incredibly painstaking manual work that she currently undertakes being done as effectively by machines any time soon.

More optimistically, ORCID’s recent announcement about their soon-to-be-launched pilot to summarize provenance information in ORCID records is a much-needed step in the right direction. Following discussions with publishers, vendors, and researchers, ORCID has developed an initial record summary prototype, which aims to make it easier for editors (and others) to find and understand the information within an ORCID record that could help them make decisions about its trustworthiness. From a peer review perspective, this could include, for example, verifying author and reviewer affiliations, past publications, etc. This initiative is a great example of how the existing research infrastructure is evolving to meet changing publishing needs. Of course, to be successful, the whole community needs to get behind ORCID’s efforts: researchers need to give their organizations permission to update and verify their ORCID records; and funders, institutions, publishers, and others need to act on that permission!

Ensuring that peer reviewers know which tools are available to them — as well as which are being used to help identify fraud, and at what point in the process — will be critical in helping them to understand their own responsibilities. And, for those situations where peer review fails, for whatever reason, both Crossref’s recent acquisition of the Retraction Watch database and the work of NISO’s CREC (Communication of Retractions, Removals, and Expressions of Concern) Working Group are great examples of how a robust infrastructure will help minimize the fallout.

Karin Wulf

Peer review provides an essential function; I don’t think anyone disagrees that both the engagement of specialists with emerging research and the evaluation the process provides are invaluable – if not perfect. How it can continue to function in the face of a combination of new and accumulating forces is the question. And it’s a question that, for humanity and not just the humanities, is urgent.

Several of the Chefs in last week’s Ask the Chefs feature noted that the convergence of fewer reviewers and an ever-accelerating pace of research is an unsustainable dynamic. As STEM research, and especially biomedical research, continues to increase in volume, the pressure on the review system is acute. But for the humanities, the situation is different and more dangerous. The attacks on higher education, including the decades-long defunding of public higher education in the US and the UK, are spearheaded by attacks on the humanities. Readers of this blog will be familiar with what’s happening, from absurdist suggestions that humanities majors pay more in tuition to the gutting of humanities departments – all on the basis of the assumption that higher education is an individual rather than a social good, and should be measured in income earned. And all despite evidence that humanities majors have comparable incomes and that businesses and STEM both require humanists to function, or at least to function better. In fact, the very idea that a crisis of the humanities can be isolated to these disciplines is itself a crisis.

What this all means is that there are — and will continue to be — ever fewer humanists placed to produce scholarship, never mind ever fewer to review it. Humanists have always found ways to generate intellectual exchange and community that are not reflected in the structure of peer review as it currently exists and is recognized by STEM fields. Humanities peer review is often an intensive, deeply textual response to research that can’t be captured or easily managed within the kinds of systems that large STEM publishers use. And of course many humanities fields are book-focused disciplines, meaning that review of book manuscripts rather than articles or data is a norm. The time investment is considerable, which is why we worry when positions that have a research expectation and support for research are being gutted. It is simply not reasonable to expect that people can, in the main, continue to produce scholarship and contribute to scholarly publishing without a secure position.

I’m enough of a historian of the longue durée to know that we don’t need to insist on sustaining every institution we’ve built, or everything we’ve been accustomed to. And enough of a reader across disciplines and fields in and out of the academy to know that higher education is far from the only place where new research and new knowledge are created and shared. But it remains the case that universities have been extraordinary engines for humanities research in ways that have benefitted us all. Yet we continue not only to underinvest in, but to defund, this very essential-for-humanity undertaking. An excellent and timely essay from Asheesh Kapur Siddique in Inside Higher Education asks the reasonable question, “Does Humanities Research Still Matter?” and of course answers yes. It does, but how it will be supported, never mind produced and then reviewed, is the core question. My fervent hope is that my STEM colleagues understand the existential crisis that we face, and its interrelationship to crises of democratic governance, income inequality, and reduced social mobility. Peer review is not the proverbial canary in the coal mine – it is the boiled frog, the meme dog sipping tea while the house is fully aflame.

Conclusion

After a couple of discussions about what we wanted to tackle in this post, we each wrote our own sections individually and without consultation — yet we found a common thread. Looking at them now, woven together, we realize that — each in our own way — we have all chosen to focus on what Karin terms the humanity of peer review: the importance of people in the process of creating, reviewing, and publishing research. While systems, tools, and standards can provide helpful (and often essential) support, it’s the contributions of individuals that will be most critical to the future of both peer review and publishing.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Jasmine Wallace

Jasmine Wallace is the Senior Production Manager at the Public Library of Science (PLOS). She is responsible for the production processes and day-to-day production and publication operations for the PLOS journal portfolio. Previously, she was the Peer Review Manager at the American Society for Microbiology (ASM), where she was responsible for ensuring that peer review practices, workflows, processes, and policies were up-to-date and applied consistently across the entire portfolio of journals. She currently serves as Treasurer for the Council of Science Editors and is the creator and host of their podcast series S.P.E.A.K. In the past, she was a Teaching Assistant at George Washington University for a course on Editing for Books, Journals, and E-Products.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender, and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Discussion

1 Thought on "Reflections on Peer Review and the Humane Future of Publishing"

Everyone’s perspectives and comments are so wonderful and crucial: peer review is indispensable in scientific research, and it especially needs the evaluation of human wisdom in the face of new technological challenges; but peer review also needs recognition of its human value!
