Each year we try to begin Peer Review Week with a look at the big picture, or at least at some of the picture's multiple dimensions. Whether with "Identity in/and Peer Review" for 2021, "Trust as an Ethic and Practice in Peer Review" for 2020, "Quality is Multi-Dimensional: How Many Ways Can You Define Quality in Peer Review?" for 2019, and more, we've aimed to start the week with a 30,000-foot view and then work toward a closer look at the Peer Review Week theme through the week's posts. We hope you'll stay with us all week, as we highlight some of the really critical perspectives on how and why trust inheres in peer review.

This year’s theme is “Research Integrity: Creating and Supporting Trust in Research”, which we previewed last week in three “Ask The” features — Ask the Community, Ask the Chefs, and a special “Ask the Editor/Surgeon/Researcher/Author” (with big thanks to Jennifer Ragala for organizing the first and third!). We recommend you take a look and add your comments if you haven’t already done so. All of the contributors have highlighted how essential peer review has been to our current research environment; all acknowledge that peer review isn’t perfect, some pointing to its real harms, and some suggesting there may be other ways to produce research with integrity. Perhaps unsurprisingly, some came back to themes of previous Peer Review Weeks that are deeply interconnected with trust and integrity, like transparency and diversity.

Although the majority of both researchers and the general public continue to have trust in research, there are also some worrying signs that this trust may be eroding. The Pew Research Center’s latest survey on the topic found that, in the US, public trust in scientists, especially medical scientists, has declined in recent years, especially since the pandemic. In spring 2020, a vast majority of respondents had a great deal or fair amount of trust in medical scientists (89%) and scientists generally (87%); earlier this year those numbers had fallen to 78% and 77% respectively. 

And these declines in trust are increasingly partisan in the US — trust in science is now significantly lower among Republicans, dropping in that same period, 2020 to 2022, from 85% to 66% for medical scientists, and from 85% to 63% for scientists overall. It also appears that researchers themselves have some issues with trusting research. Elsevier’s 2019 Trust in Research report, based on a survey of over 3,000 researchers, found that “While 62% of researchers regard all or a majority of the research outputs they see as reliable, over a third (37%) said they only viewed half or some of them as reliable.” 

However, for many of the doubters, peer review is seen as an important way of tackling their concerns, with over half the respondents to the Elsevier survey stating that they only read/access content that is in or linked to a peer-reviewed journal (other popular options were thoroughly reviewing the data, and “seeking corroboration from another trusted source, e.g., see if research is cited in a known journal” — presumably peer-reviewed). Moreover, the report identifies four trust inhibitors, all of which are related to peer review: “Unclear if an article is peer reviewed, Not peer reviewed, Low quality peer review, and Peer review scope.”  And journalists and the media, still relied on by many as their source of news, also recognize the value of reporting on peer-reviewed scientific research. For example, US News’s guidelines note that, “A responsible health story not only delivers the facts but also puts them into broader context. As reporters and editors, we look at the peer-reviewed evidence and help readers understand the significance of the new information. We evaluate its scientific validity. In other words, we take a rigorous look at the existing research.”  So for researchers, as for journalists and, by extension, at least some of their readers, trust in research does seem to begin with trust in peer review. 

Of course, these examples are just a small subset of the whole ecosystem. Research doesn’t just mean medical and scientific research — it encompasses all disciplines and geographies, and it includes both academic and commercial research. And peer review isn’t confined to just research journal articles — it is conducted in a wide range of settings, both related to publishing (books, data sets, audio-visual, and more) and to other phases of the research process (grant applications, promotion and tenure reviews, etc.). To take account of this, peer review is conducted in many different ways and for many different purposes, all intended to improve trust in the research being reviewed. 

But is it working? Does trust in research begin with trust in peer review across the whole ecosystem? What does that look like for different communities and stakeholders? In this post, we attempt to answer some of these questions from the perspective of our own areas of interest and expertise: Karin, as a researcher and librarian working in the humanities; Tim, as a service provider; and Alice, as an infrastructure provider. We hope you’ll add your own perspectives in the comments.


Karin:

Trust is hard earned, and can be easily lost. Peer review is, self-evidently, a process whereby fellow experts in a field of research assess works of scholarship, from funding through publication and then, if you publish in a book discipline like history, into post-publication review. It is never meant to be definitive about whether the research and the interpretation of the research sources is right or wrong per se; it simply indicates that a group of experts has deemed the work valuable enough to fund, or publish, or to earn a position for the researcher.

We do ourselves and our readers, whether fellow experts or media or the public, a massive disservice when we over-endow peer review with the magical properties of overcoming human failings (be they biases in mild or extreme form, like racism, or confirmation bias, or simple or major mistakes, hasty decisions, and more). It’s also a mistake to undersell the value of robust, organized expert feedback. I think we should always be talking about what each aspect of that means: Who decides who is an expert? How is that feedback organized and conveyed? What are the consequences of that feedback? There’s a reason why previous Peer Review Weeks have emphasized transparency and diversity – the lack of both can and does undermine trust in peer review, and more of both is essential for what I think is a process with an incredibly important role in knowledge production writ large.

In the humanities, peer review tends to be intensive and extensive, focused on sources, methods, and argument – which includes grounding in the existing scholarship. Is the piece of work, whether a Digital Humanities project, a journal article, a book manuscript, or even a book review, employing the right sources for the research question? Has the author or the team used an appropriate methodology for exploring those sources to the fullest to examine the issue? Have they made a persuasive case from the sources and method, and have they situated it in the existing scholarship enough to do two things – use that existing work to the fullest extent to support their case, and show that theirs is a contribution? Each and every aspect of this can be subjective, but the point is to do our best — and to do our best within the context of a specific expertise.

It’s a mistake to let peer review become synonymous with peer-reviewed science articles. As I remarked in another context when someone said that “the humanities isn’t like science”: well, science isn’t like science either! The caricature of an objective up or down, right or wrong masks too much that is, in fact, about using a human process to get things as right as we can with what we know now. That’s what we do in the humanities, and that’s what peer review in science does, too. At some level, we trust in peer review when we trust in our fellow humans.

Tim: 

What is trust in research anyway? It’s essentially the same stuff as you’d expect to develop with a new accountant: an integration over many small signals that the opposite party is a legitimate, serious, and trustworthy source. Did they show up in a clown costume? Did you see their face on the news last week in connection with a fraud case? Do they make basic errors when they talk about tax law? Fall down on one of these, or a multitude of other signals, and your trust in them is undermined. Undermine it enough, and you’ll give someone else your taxes to do. 

This notion applies to research in exactly the same way, with the article being the medium for the signals. An article is a highly complex document, an intellectual Heath Robinson/Rube Goldberg contraption that joins a verbal argument about the generality of some feature of the real world (e.g., drug X is effective against disease Y) with the collection, analysis, and interpretation of data about that feature. While there are always several valid ways to approach a research question, there are many more approaches that are suboptimal, biased, or just completely wrong. Perhaps fortunately for the reader, articles that are based on flawed approaches are often full of other signals that the researchers are not up to the task of performing the research correctly. For example, the introduction may omit crucial references to relevant papers, or just parrot standard phrasing without making any intellectual advance. Correctly marrying experimental design with statistical analysis is very hard, especially for real world data, and almost all inexperienced researchers will fail to signal competence here. More broadly, being able to anticipate reader criticisms and address them thoroughly in subsequent sentences is a powerful signal that the authors really understand the strengths and weaknesses of their work. 

So what has all this got to do with peer review? Putting your trust in a preprint is like hiring a new accountant from Craigslist. They might be excellent, but you’ll be well served by persistently looking for signals that they’re not. A peer-reviewed article in an established society journal is like taking on an accountant recommended by a friend. Your friend is telling you that they’ve searched for the tell-tale signals of a huckster and found nothing, and they’re willing to stake their reputation that their favorite accountant will do a good job for you too. 

Alice:

Infrastructure plays a critical role in building trust. The old adage that no one cares about infrastructure until it fails (think plumbing, transportation, or internet access, for example) is every bit as true of the research infrastructure (think error messages when submitting a manuscript or a citation link taking you to the wrong article). If trust in research begins with trust in peer review, it’s clear that the infrastructure supporting peer review must also be trustworthy. So, what does that actually look like?

It’s no secret that transparency generally increases trust, and a fully open research infrastructure facilitates this transparency, especially if it’s accompanied by a more open and transparent approach to peer review in general. Note that this doesn’t have to mean fully open peer review; it could include things like open (signed or unsigned) review reports, more transparency around reviewer selection and other parts of the review process, and so on. A robust and well-supported research infrastructure also increases efficiencies in the process. Transparency in Peer Review was the theme of Peer Review Week 2017, and Irene Hames (independent peer review and publication ethics expert), Elizabeth Moylan (then at BioMed Central), Andrew Preston (then at Publons), and Carly Strasser (then at the Gordon & Betty Moore Foundation) shared their thoughts on the topic in this post. As Andrew noted (highlight mine), “transparency is about making sure that we are able to reveal the information necessary to bring a level of trust and efficiency to the peer review process. It doesn’t necessarily require open review (although that would be nice), but it does mean coordinating across the publisher divide in order to give the community an understanding of who is shouldering the workload, what they have on their plates right now, and exposing information about the expertise of reviewers and the quality of the review process.”

Open identifiers — like Crossref DOIs for review reports, ORCID identifiers for reviewers, and Crossref Funder Registry DOIs for funders (which will be migrating to ROR in future*) — are critical to this. They don’t just enable identification and disambiguation; they also include (or at least have the capacity to include) invaluable metadata, such as provenance information. All of this can be shared within and between different research systems — from grant application through manuscript submission to content platforms and discovery systems.

Standards are another important element of the research infrastructure — and another way to improve both transparency and interoperability (aka efficiency). My own organization, NISO, is currently working on formalizing as a standard the peer review taxonomy originally developed by STM in 2019. One of the goals is “harmonizing and better communicating definitions of discrete elements of these processes, so that members of the community — whether they be authors, reviewers, editors or readers — can quickly and easily recognize how to more productively participate in the creation and qualification of scholarly content.” Using clear and consistent terminology across publications, publishers, and platforms makes the process smoother, more transparent, and ultimately more inclusive and equitable — better for everyone!

*Watch for more on this from Crossref and ROR in the coming months.

Karin Wulf


Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Alice Meadows


I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021–22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Tim Vines


Tim Vines is the Founder and Project Lead of DataSeer, an AI-based tool that helps authors, journals, and other stakeholders with sharing research data. He’s also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor of the journal Molecular Ecology for eight years, where he led its adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Discussion

2 Thoughts on "Does Trust in Research Begin with Trust in Peer Review?"

@Tim says that “A peer-reviewed article in an established society journal is like taking on an accountant recommended by a friend (…) willing to stake their reputation that their favorite accountant will do a good job for you too”. But if the peer review is blind, then this friend is not really risking their reputation at all, are they? (well, I guess with the journal perhaps, but what do journals really do with peer reviewers who have done a bad job, or haven’t been able to spot some mistakes? I guess nothing?). Open, non-anonymous peer review would be more trustworthy in that sense – there’s more reputation risk for the reviewers – although I understand there are issues with those, too…

Hi Maruxa, thanks for the comment. I was picturing the journal as being the friend, and maybe the Editor if they’re named on the article. I definitely agree that sharing the peer review history is a big source of trust.
