In keeping with our tradition of asking the Chefs to share their thoughts on the Peer Review Week theme each year, ahead of the official week of posts, today’s post invites them to answer the question, “Is research integrity possible without peer review?”. It follows hot on the heels of yesterday’s Ask the Community post, in which you heard from a wide variety of Society for Scholarly Publishing (SSP) and scholarly communications colleagues about their thoughts on the same question. We hope that, together, everyone’s responses will give you plenty of food for thought as we look forward to the eighth annual Peer Review Week, starting on Monday (September 19).


Rick Anderson: Research integrity isn’t possible without stringent and scrupulous article review. Is it possible to conduct such review without the current system of distributed, outsourced article review by peers? Sure – I can imagine, for example, a journal setting up a crew of in-house reviewers with deep expertise in the journal’s subject area and in research design, carefully vetting each submitted article and counseling with the editor-in-chief as to its quality, relevance, and significance. But the key word here is imagine, because while this is possible in theory it’s difficult to see how it could be implemented in practice, at scale. Scholars report that it typically takes at least several hours to review a paper. So let’s posit a journal that gets 2,000 submissions per year (38.5 per week); if effective review of each paper requires, let’s say, four hours, then that’s 154 hours of review per week – or about 4 FTE. That might actually sound reasonable (especially when desk rejection is taken into account), until you consider that a journal that receives only 2,000 submissions per year is not likely to be one that has sufficient resources to employ four full-time reviewers. Bone and Joint Journal, for example, reports that it gets about that level of submissions, and its parent organization (which publishes three other journals as well) appears to have three employees in total. Nature, on the other hand, receives about 11,000 submissions each year; Chemistry of Materials receives more than 5,000, as does the Journal of Clinical Oncology. This illustrates the crucial difference between possible and feasible. Traditional peer review has many problems and weaknesses; unfortunately, they seem to me to be inextricably bound up with the very characteristics that make it feasible: its decentralized organization, its broad distribution, and its voluntary nature.
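Rick’s back-of-the-envelope arithmetic can be sketched in a few lines of code. The figures (4 hours per review, a 40-hour work week, and the submission volumes quoted above) are the illustrative numbers from the paragraph, not measured data:

```python
# Rough reviewer-workload estimate, using the illustrative figures from
# the paragraph above (4 hours per review, 40-hour work week).

def reviewer_fte(submissions_per_year, hours_per_review=4, hours_per_week=40):
    """Full-time-equivalent in-house reviewers needed to review every submission."""
    weekly_review_hours = (submissions_per_year / 52) * hours_per_review
    return weekly_review_hours / hours_per_week

print(round(reviewer_fte(2000), 2))   # 3.85 -- about 4 full-time reviewers
print(round(reviewer_fte(11000), 2))  # 21.15 -- at Nature-scale volumes
```

The same arithmetic scales linearly, which is the point: at 11,000 submissions a year, an in-house model would need on the order of twenty full-time reviewers before desk rejection is factored in.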

David Smith: I’m somewhat removed these days from the hurly burly of peer review, so my thoughts should be taken in that context. But my view is that peer review isn’t there to ensure research integrity, it assumes it. If people are going to cheat, then it’s wrong to think that peer review is the trap for such behavior. To me (ex-scholar, reformed) peer review assumes that whilst the research might be completely wrong, it is submitted by peers… and thus it’s axiomatic that they be considered as good actors. Now, peer reviewers can be alert for ethics statements and the other particulars of the field of study in question, but they cannot be expected to catch research malpractice. They do not have the time, the tools, or the technical capability to do this. On top of that, I cannot imagine anyone wanting to suggest, in any way, some impropriety that might or might not be present as a result of a peer review analysis of a given paper. For the legal implications loom large here, and one does not have to look very far to see evidence of the litigious behaviours of those accused of dubious practice.

I cannot imagine that it is the right solution to put forensic image and data analysis tools into the hands of peer reviewers, because it’s the wrong place to do so. If we look at the issues of image and data manipulation, the examples I have seen (without focusing on specifics) are all frankly and bluntly the hallmarks of clear malpractice. This should get trapped and flagged BEFORE it ever gets to a peer reviewer. It seems to me that greater traceability of the elements of published research is vital, so perhaps peer reviewers can have a role to play there in checking that. But bluntly – if a paper has problematic images, say, and the authors have not been able to supply a satisfactory answer in a very timely manner, then they shouldn’t get to play. Following on from that, there ought to be an exchange of data between publishers where there are problematic authors who might need greater scrutiny. And yes, that means research institutions ought to be more involved for reasons of a reputational nature. But peer reviewers should be free to focus on the research efforts of the good actors, striving to reduce uncertainty in whatever the field may be.

Haseeb Irfanullah: My short answer is “Yes.” But, before elaborating on it, let me share a few thoughts on peer review itself.

Frankly speaking, sometimes I feel that the importance of peer review in the scholarly arena is overrated. As a journal editor, I know too well how challenging it is to get good peer reviewers who have the time and sincerity to read and comment on a manuscript I forward. As a researcher/contributor, I know how my papers get delayed as potential reviewers decline to review them one after another, or take longer than initially promised after agreeing to review. As a reviewer, I know how demanding journals can be to capitalize on (or to exploit) my expertise and altruism for free.

We often say that, despite many systemic flaws, peer review is the best quality assurance option we have. But, 1) have we ever measured what “real disasters” are happening (not the imaginary ones) because of articles published in predatory and low-quality journals which don’t follow publishing standards, including the peer review process? Yet we continue to argue against preprints for not being peer reviewed, and thus being potential time-bombs of misuse. 2) On one hand, we lament not getting enough good peer reviewers for manuscripts submitted to our journals. On the other hand, we believe that open review, engaging the wider community in the review process, is effectively possible in the real world. 3) Despite following a due peer review process, Diamond Open Access journals without APCs from the Global South are often undermined by being portrayed as low-quality.

If we now link the peer review process to ensuring research integrity, it implies that it is not 100% researchers’ responsibility, or that it is not possible for them to fully ensure research integrity. I think peer reviewers can contribute to ensuring research integrity while reviewing research proposals, which may have different detailed information, components, sections, and/or annexes. But peer reviewers of research manuscripts can’t effectively do the same because they are dealing with completed research. If peer reviewers report some indications of a lack of integrity, the manuscript will essentially be rejected. But since it may not be possible for the researchers to overcome those (let’s say, unintended) breaches of integrity identified by the reviewers, researchers/authors will essentially look for a new journal, hoping that the new journal’s peer reviewers would not look into such ‘gaps’ so rigorously, and so they would get the manuscript published anyway. In such a case, the peer review process fails to ensure research integrity due to loopholes in the publishing system. (Nevertheless, Master’s and PhD thesis examiners may ensure research integrity even of a completed research project, as there are not many options available to the students/researchers other than to address the examiners’ observations/suggestions.)

After publishing, if we focus on the use of these apparently flawed publications, there is no universally adopted system or regulation that stops us from using these research findings by citing them in other publications or by including them to build evidence for a case to change a policy, for example. We may use available lists of predatory journals, which are often controversial or behind a paywall (or the whole idea of labeling a journal predatory may itself be branded as ‘weak’). Nevertheless, flawed publications find themselves permanent residents of the knowledge ecosystem, and peer review isn’t an effective deterrent.

As researchers, we use literature, both peer-reviewed and non-peer-reviewed/grey, as we feel or believe we need it in designing our research projects or writing up the results to communicate. Our peer reviewers may encourage us to cite more peer-reviewed literature if the ratio of peer-reviewed to non-peer-reviewed materials in a manuscript leans too far towards the latter. But it is rare for reviewers to ask us about the quality of the literature cited in our manuscripts.

In recent years, well-known publishers and their journals have increasingly allowed authors to cite preprints in submitted manuscripts. Many are integrating preprints into publishing workflows. The same may be true for some research funders welcoming preprints in research proposals. Are these not effectively undermining the way we used to rely on the peer review process, in favor of relying upon the authors’ judgment?

Our judgement as readers and users of research often defines what is a good piece of research: in the case of peer-reviewed articles, it is largely influenced by the journals or publishers in question, and in the case of non-peer-reviewed ones, by the host/funding organizations. But we are increasingly relying on authors for research integrity. We should promote scholarly and research ethics by investing more in building the capacity and understanding of those who are doing research and those who are using it. We should move away from our over-reliance on peer reviewers and stop branding them as the ‘Guardians of the Scholarly Galaxy’.

Tim Vines: A knee-jerk response: yes. In the same way it’s possible to be completely honest when filling out your tax return, even when it’s very unlikely that your return will be audited.

But the question isn’t really about individual honesty, it’s about integrity across the enterprise of research as a whole. The goal is a system that makes honest research the path of least resistance compared to all the other options (data fabrication, image manipulation, etc). Making such a system is difficult because it’s much simpler to make up numbers to fit your hypothesis than it is to construct a worthwhile hypothesis that is then convincingly addressed by real world data.

And this is where peer review comes in: it requires authors to send many, many signals that they’re conducting research competently and in good faith. They need to demonstrate that they’ve read the previous literature. They need to demonstrate that they understand how to design an experiment around their hypothesis, and then demonstrate that they can collect and analyze their data without making obvious errors.

All of these things are hard and require significant expertise. Inexperienced or incompetent researchers will fail to send the right signals and will be unable to publish their work, at least in the more selective journals. Dishonest researchers will still have to go through all the motions of doing quality research, even if their data are being massaged behind the scenes (more on this in a moment). The prospect of peer review thus pushes researchers to work as if every aspect of their study will be scrutinized: even if the review process on a particular paper wasn’t rigorous, the next review process might be, particularly if it’s at a highly selective journal.

Moreover, journal peer review is an established and expected workflow, and we can add new components and checks without building significant new infrastructure. This is where peer review really can start to make honest research the ‘path of least resistance’. The key is enabling peer review of data, both by people and by machines. Data fabrication leaves all sorts of clues in the values, and once data are regularly available for scrutiny, then the fraudster’s chances of detection – with its attendant severe consequences – get much higher.
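As one toy illustration of the kind of machine screening Tim describes (a generic heuristic, not any specific tool or his proposal): the leading digits of many naturally occurring datasets follow Benford’s law, and fabricated numbers often don’t, so a large deviation can flag a dataset for closer human scrutiny:

```python
import math
from collections import Counter

def benford_deviation(values):
    """Mean absolute deviation between a dataset's observed leading-digit
    frequencies and Benford's law. A screening heuristic only: a large
    deviation flags data for closer scrutiny; it is not proof of fraud."""
    # First significant digit of each value (zeros carry no leading digit).
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts.get(d, 0) / n - expected[d]) for d in range(1, 10)) / 9

# Powers of 2 are a classic Benford-conforming sequence; a flat run of
# consecutive values (all starting with "1") is not.
print(benford_deviation([2**k for k in range(1, 200)]) < 0.02)  # True
print(benford_deviation(list(range(100, 200))) > 0.1)           # True
```

Real screening tools combine many such signals (digit patterns, duplicated values, implausible variances), but the workflow point stands: checks like this run cheaply on submitted data before or alongside human review.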

David Crotty: I think “Is research integrity possible without peer review?” is the wrong question. Anything is possible, but asking it in this manner creates a yes/no, either/or mindset, something which plagues so many efforts to improve scholarly communication. Rather than thinking in substitutive terms, we need to think in additive terms. Does peer review aid in promoting research integrity? Absolutely. Is it enough on its own? No, definitely not. Are there other things we can do on top of peer review to further this goal? Yes, more transparency (open data, open methods, open peer review) would be greatly beneficial during the publication process, and better and faster post-publication corrective measures would help as well.

Alice Meadows: Like some of the other respondents to this question — in both this and yesterday’s Ask the Community post — my answer is essentially, maybe… Formal peer review, in the sense of one or more peers reviewing a piece of written research before publication, is just one of many ways of evaluating that research. It’s an important one, for sure. When done well, it can be the most important one. But, as others have noted, peer reviewers can’t be experts in everything they’re asked to review. As someone who occasionally reviews articles for scholarly communications journals, I am very aware of my own limitations as a reviewer! Some of us may have a deep understanding of the underlying data, but less familiarity with the other works being cited. Others may be knowledgeable about the subject area but have no idea how to tell whether an image has been manipulated. So layering onto the process additional types of review, which may or may not be labeled “peer review” in a formal sense, seems to me to be just as essential to ensuring research integrity. These could include, but aren’t limited to, preprint commenting, post-publication review (invited and public), preregistration, registered reports, open data and methodology, reproducibility requirements, and more. Likewise, as others have also noted, working to avoid bias or discrimination in reviews — for example through increased transparency — is equally vital to the integrity of the research being published.
Alice Meadows

Alice Meadows is NISO's Director of Community Engagement, responsible for engaging with and developing our member community. She was formerly Director of Communications and Director of Community Engagement at ORCID; and before that, she worked for many years in scholarly publishing, including at Wiley and at Blackwell Publishing. Alice is also a Co-Founder of the MoreBrains Cooperative, which provides consulting services to the open research/research infrastructure community.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

David Smith

David Smith is a frood who knows where his towel is, more or less. He’s also the Head of Product Solutions for The IET. Previously he has held jobs with ‘innovation’ in the title and he is a lapsed (some would say failed) scientist with a publication or two to his name.

Haseeb Irfanullah

Haseeb Irfanullah is a biologist-turned-development practitioner, and often introduces himself as a research enthusiast. Over the last two decades, Haseeb has worked for different international development organizations, academic institutions, donors, and the Government of Bangladesh in different capacities. Currently, he is an independent consultant on environment, climate change, and research systems.

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

4 Thoughts on "Ask the Chefs: Is Research Integrity Possible without Peer Review?"

There’s a big problem in the quantitative social sciences that peer review is not solving, and that is when both the author and the reviewers don’t know enough about proper statistical methods. We need publishers to employ reviewers (or somehow recruit volunteers) from outside the discipline but with deep training in statistical methodology to “non-peer review” every submission that has made it through the normal peer review process, specifically to look for invalid quantitative methodology.

Interestingly, I’ve recently worked with a scientific society that had brought in a huge number of statistical reviewers from outside of their field to help the journal do this sort of review, and it has been problematic, as the reviewers’ lack of context and understanding of the research continually confounds authors, who are left struggling to respond to out-of-context requests. What we really need is better statistical training across the board for researchers, which (slowly) seems to be happening. This particular journal’s response has been to replace the non-contextual statistical reviewers with more and more early career researchers who are much better versed in these analyses.

We have the same issues in STEM, made worse by the fact that we cover multiple fields in which researchers present the same information in different formats or units based on what they are trying to convey. One can establish standards for a given field, but trying to do that across a dozen fields is very much like banging that square peg into the round hole over and over again.

As is often the case, it depends upon what is meant by peer review. At well-resourced journals, virtually all research reports undergo formal internal review, with comments by full-time and/or part-time paid editors. In that case, on a limited basis, research integrity is possible without what most people mean by the term: external peer review by individuals not related to the journal.

In journals that are less well resourced, research integrity would be difficult to maintain without external peer review.

Having been an EIC at a very well-resourced journal (JAMA) and one less well resourced (ADC), I always find questions around peer review limited, because the term is often not defined and, as mentioned above, depends upon the resources of the journal.
