This year the theme is trust in peer review. So we’ve asked the Chefs: What would improve trust in Peer Review?

[Image: “In peer review we trust” protest sign. Image via Sarahmirk.]

Robert Harington: There are many interlacing factors that contribute to trust in peer review. I want to highlight one factor for journals across the sciences and mathematics – double-blind peer review.

Most journals in the sciences and mathematics deploy single-blind peer review, in which reviewers know the identity of the author(s). A journal’s use of a double-blind model reduces bias in the peer review process. While it is entirely possible for stakeholders in a relatively small field or discipline to work out who is who, double-blind peer review introduces a pause that should help reviewers and editors navigate unconscious bias.

When thinking about unconscious bias, we are really talking about parts of an author’s identity that may affect, even subconsciously, a reviewer’s response to the author’s work: nationality, ethnicity (or supposed ethnicity inferred from a name), name recognition, seniority, the reputation of an institution, and so on. It really is hard to come up with reasons not to deploy double-blind peer review.

The arguments I hear boil down to questioning whether there is a problem that needs solving, given the extra layer of effort involved in double-blind peer review. In mathematics, some may argue that the identities of those working in a particular area of research are already known, or may be discovered through the preprint server arXiv. In my book, removing even the slightest hint of unconscious bias is necessary, so while I understand caution when invoking change, the need to change is a response to a basic recognition that we are all vulnerable to unconscious bias. Double-blind peer review helps us navigate to a more equitable approach to publishing and to trust in peer review.

Rick Anderson: I guess I’m going to be a pain and push back a little bit on the premise of the question: to what degree is trust in peer review a problem that needs to be resolved? I know there are lots of voices out there questioning the necessity/effectiveness/trustworthiness of peer review as it’s currently practiced, but in my experience those voices tend to be from either outside the core of the scientific research community or on the margins of it. This isn’t to say that no one in mainstream science questions the trustworthiness of peer review, of course – but according to Sense about Science’s 2019 Peer Review Survey, 90% of researchers believe that peer review improves the quality of research, and 85% believe that peer review is essential to maintaining appropriate control in scientific publication.

So what I think we need to find out is whether there’s genuinely a crisis of trust in peer review within the scholarly and scientific community as a whole. If so, then we need to find out what has caused that crisis. I think the answers to those questions will go a long way towards helping us figure out how to resolve the crisis of trust—unless, of course, they tell us that the crisis has been greatly exaggerated.

Tim Vines: Peer Review is like public transport – nobody really loves using it, but most users grudgingly recognize that it’s essential for a civilized society. Of course, there are always others who will hate on it no matter what, and, given a platform like Twitter, they can do a lot to undermine popular support. Sweeping reform to address the haters’ complaints will not turn out well, because the haters rarely understand the details of the process well enough to recommend workable changes. An analogy would be replacing all the buses with electric scooters: the latter work great in always-sunny San Diego, but throw in some steep hills and heavy snow and they stop being fun very quickly.

That said, practitioners could do a lot to prevent self-harm to the peer review brand. By and large, we know what we need to do: make peer review more consistent, more helpful, and more rigorous. Far too many researchers (particularly in medicine and the life sciences) are hopeless at experimental design and statistics, so articles containing data should *never* be accepted for publication without the approval of a proper statistician. That’s a big ask, but there are far too many articles with fatal flaws being published, even by respectable journals. We can’t achieve universal statistical review with person-power alone, so we need better automated systems to triage statistically flawed articles, and better systems for promoting reproducibility.

Journals also need to be able to reject a much higher proportion of articles without fearing backlash from publishers and, ultimately, the subscription-paying libraries. Better messaging about the value of a few great articles compared to many bad ones would help here. Publishers would also need to take a more active role in auditing and reforming journals that are not running a thorough peer review process. On the Open Access side, moving away from APCs (which incentivize acceptance) to submission fees (which incentivize a quality review experience) would also be a vital shift. Ultimately, trust in peer review is built when individual researchers have a great review experience (regardless of the outcome); all we need to do is make that the norm.

Jasmine Wallace: In the trust relationship between journal publisher, author, and community, evaluation of oneself is a great first step toward improving trust in peer review. Taking time to carefully assess our roles and responsibilities ensures we remain trusted. We should take our collective identity and organizational needs into consideration and evaluate them alongside those of the community, especially as they relate to systemic structures. We should keep in mind that trust is not one-sided and that all parties are held accountable for contributing to the overall benefit of the ecosystem. In the fiduciary relationship of a financial trust, each party – the settlor, the beneficiary, and the protector – works with the others to increase their collective resources.

If we were to apply this Trusts model in the publishing space, we would identify the roles as follows. Authors are the settlors: the individuals who have legal ownership of their intellectual property and the right to be equitable with that property. The community are the beneficiaries: the individuals who benefit from the property and are “expected” to act equitably. The publisher is therefore the protector: the party responsible for preserving the intellectual property, for making sure all legal obligations are upheld, and for ensuring that everyone in the relationship receives what they need in an equitable way.

In order to ensure we are upholding our end of the bargain, publishers must honor the power being granted and then work to better secure the property we protect. Make sure nobody misuses the property. Make sure all parties are protected. Make sure all parties are being honest (no stealing, misrepresenting, or spreading falsehoods), thereby laying a foundation that can readily be built upon. Above all, we have to ensure we are not abusing the relationship by being overprotective and failing to add to the trust quotient. We may be tempted to do the following:

  • providing constant surveillance and restrictions
  • encouraging safety and dependence over autonomy and exploration
  • assuming we know “what’s best” for the authors and community, absent reliable data

By steering clear of the behaviors outlined above, journal publishers can limit their overprotective nature and extend trust to their authors and communities. To keep the trust going, publishers should be acutely aware, while developing new policies, practices, and procedures, that anything overdone or not carefully considered can have adverse effects. Overall, yes, it is journal publishers’ responsibility to protect our authors and community from anything that could jeopardize their well-being. However, we must take care not to let these protections become the very things that work to dismantle trust in peer review.

Lettie Conrad: Trust is a delicate component of publisher relationships with user communities, especially when we rely on the exchanges of value and credibility that are critical for completing our core mission or goals, as is true for peer review. Publishers trust editors to facilitate high-quality articles; reviewers trust authors to contribute authentic ideas and analysis; authors trust everyone involved to develop and represent their work with integrity. When it comes to supporting peer review interactions that engender loyalty and honesty, my mind turns to the opportunities to build and maintain trust in these editorial relationships via pleasing, productive digital information experiences.

This requires remembering the human components of the research process and serving personalized, time-efficient means of interacting. Trust in peer review includes attending to the big and little ways we are investing in our relationships (or not) with editors, authors, and reviewers. Do we present opportunities to engage with our journals in ways that express respect? Do we recognize that ours is one of thousands of interfaces users encounter? Do we make an effort to save busy experts time and frustration? We can improve trust in peer review with small, daily gestures, such as clear communications and sign-posting of tasks that arise in the review process.

Of course, policies and procedures that ensure ethical review transactions are foundational. But, when building the workflows that enable reviews, establishing and safeguarding trust should be part of the digital experiences we design for authors, editors, and reviewers — and should be considered part of the everyday work we do to facilitate the publishing phases of the research communications lifecycle. As someone who wears the author, editor, and reviewer hats throughout my daily scholarly journeys, I have come to invest in those journals that demonstrate respect for my work and engender trust throughout the publishing process.

David Smith: Well I suppose the answer depends rather on the issues of the person asking the question.

What concerns me personally here is not the arcane procedural discussion about whether the reviews ought to be anonymous or not or that sort of thing, but whether in fact there’s a more fundamental challenge to the business of improving the signal to noise ratio of research outputs:

Bad Actors.

I have been following Elisabeth Bik (@MicrobiomDigest on Twitter and on Science Integrity Digest) and her work looking at image manipulation in published research. What she is uncovering is frankly very worrying indeed. She has clear examples of manipulated research results: gels, cell images, blots, graphs, plots, signal traces; you name it, she has examples. Examples that CANNOT be explained away by a desire to present nice clean results. In my opinion, these are examples of individuals, and maybe groups, deliberately engaging in scientific fraud. And these fake results (fake science) are making it into journals at all levels.

Peer Review does not currently handle this, predicated as it is on the premise that peers are presenting their research in good faith. It’s not realistic to expect reviewers to be experts in identifying such manipulations. And across the board, we currently lack the tools to handle this sort of bad-faith behavior. This needs to be fixed. Personally, I think scholars who are caught making such manipulations should be permanently banned from future participation in scholarly research, but such a position requires that tools and processes are in place to robustly and fairly examine such actions and apply the punishments in the right places. If this isn’t fixed, we’ve got a big problem corroding the very heart of the scientific literature. It’s the equivalent of doping in elite sport: if you cannot trust that what you are seeing is true, then the value of all of it is called into question. And as this year has shown repeatedly, we do not need any more of that.

Haseeb Irfanullah: Scholarly publishing is all about trust. Editors trust authors to submit authentic research to publish. Authors trust editors to send manuscripts to competent peer reviewers. Editors and authors trust reviewers to make constructive, unbiased suggestions, and, of course, to send the reviewer’s comments on time. Reviewers trust authors to appreciate their comments and act accordingly. And, a peer reviewer also trusts that someone, somewhere in the world will review her manuscript, as she voluntarily did the same for unknown fellow researchers.

So, with a system so much based on trust, why do we need to discuss trust in the peer review process? Peer-review-less predatory journals, retractions of published papers, and a ‘reproducibility crisis’ in research are among the reasons to blame for shaking our trust.

All these issues are, however, to be tackled by scholars, their institutions, and scholarly publishers. We can make them aware of these issues, we can build their knowledge, expertise, and skills to avoid breaching trust, and we can even punish them, if they fail to comply.

But, if we look beyond the scholarly world, does the non-academic public understand the peer review process? Do they appreciate the service peer reviewers offer to make research authentic and trustworthy?

In August 2019, a survey by the Pew Research Center showed uncomfortably low trust in scientists among the American public. About 60% of Americans said they would trust research findings more if the data were publicly available, and more than half of respondents said research findings are more trustworthy if they have been reviewed by an independent committee.

For professional reasons, I have been following climate change discourses for the last 12 years or so. We saw, and still are seeing, how politicians, oil companies, even governments run propaganda to paint climate change as a myth by presenting or misrepresenting counter-research. We have also been seeing, during this COVID-19 pandemic, scientific evidence being overlooked, even laughed at, by top leadership.

So, while we discuss how to improve trust in the peer review process, we should also ask ourselves how to improve trust in research and researchers. We need to relentlessly educate and re-educate people on why we need to rely on scientific processes, despite their limitations and the uncertainties around them. And, if we cannot make scientists lead our governments, let us at least put some politicians in power who will act on scientific evidence.

Charlie Rapple: I’m by no means an expert in peer review and may well be behind the times in terms of improvements that have already been made; I particularly have no experience of peer review in the harder sciences. But from my perspective of having published in and reviewed for ‘industry journals’ I have pondered two possible improvements: a checklist provided to reviewers and published alongside the article, and an indication of the reviewer’s experience / qualifications for reviewing.

The checklist would both guide the reviewers, and inform the authors / readers, as to the scope and nature of the review. For example, years ago I co-authored an article that summarized a piece of market research I had been involved with. It included lots of survey results (X% of respondents said Y) and interpretation of that. We did not provide the data for review, so the reviewer would have been taking my statistics on trust — that’s the sort of thing that perhaps could be highlighted as part of a ‘nature of review’ checklist – “Have the data points in this article been validated?”

I’ve also reviewed several articles over the years and — having learned from the people who’ve reviewed my own articles over that time (many thanks to you all!) — I have become much more stringent as a reviewer. I question data points, challenge interpretations, propose amendments to wording to separate opinion or extrapolation from what the data actually shows. Thinking back to the earliest articles I reviewed, I did none of this. I didn’t know that I was supposed to! A checklist could have steered me to provide a more rigorous review.

In terms of the reviewer’s experience, I think authors / readers and editors would all benefit from an indication of how well the reviewer has been ‘trained’. When I started reviewing, I’d had no training at all. I was given “notes for reviewers” but these were more focused on explaining the process (how to capture and submit your feedback) and on things like house style. My sense of a ‘good’ review has come from the reviews I’ve had of my own work since. Maybe some journals do train potential reviewers as a matter of course? It would be great to surface this, if so, along with any self-guided study, such as that provided by the Publons Peer Review Academy. Publons is great for providing some indication of the reviewer’s level of experience (e.g., how many reviews completed, for how many different titles, over how many years) — I think it would be beneficial if ’training / qualifications’ could be listed there, and for that information to be made publicly available (anonymously, if needed) alongside every article.

Karin Wulf: Thinking about trust as providing ballast for the entire review process — for the author, their colleagues, editors, reviewers, publishers, disseminators, and the public — it is awfully hard to isolate one thing that would improve it. Maybe it is the intensity of this pandemic year, but from my vantage one thing that can enhance trust is reminding us all how much we rely on one another, and one another’s expertise, to do our jobs and to advance knowledge. From the perspective of a reviewer, something that helps remind me of my role in this process is when the editor communicates with me about my review. Not always, but often an editor will let the reviewers know the outcome of a submission; sometimes they will even summarize all of the reviews (blinded) so I have a sense of how my review contributed. Maybe especially under duress, making your place in the process explicit underscores our interdependence, and the importance of mutual responsibility and mutual trust.

Alice Meadows: I’m a firm believer that the best way to improve trust in peer review — or pretty much everything — is by increasing transparency. Make it easy for people to find information about the peer review process, and make sure that information is clear, accurate, and comprehensive. Having contributed to and helped research publication options for a couple of articles recently, I can say from firsthand experience that, at least for journal submissions, this is definitely not always the case!

Frequently you have to search quite hard to find the submission information and, when you do, it is often lengthy and full of jargon. It’s also, of course, typically directed at authors, but I would argue that it is just as important — if not more so — that readers also understand how the research they’re reading has been reviewed. How many reviewers were there? How were they selected — were they recommended by the author(s), hand-picked by the editor, or identified via an algorithm? What are the minimum requirements for a review — is it a simple check-box exercise or are reviews more extensive? And don’t assume open peer review is transparent just because it’s open! For one thing, there’s no clear definition of open peer review and, even within a single publisher or journal, there can be multiple variations if, for example, reviewers are given the option of whether to sign and/or publish their review. These are just a few examples of the type of information that I, as both author and reader, would like to know.

I’m in the privileged position of 1) being a native English speaker, 2) having worked in publishing so having a good understanding of the process, and 3) not working in a technical field, where the guidelines can be much more complex. This is not the case for many, if not most, of your authors and readers! Last but not least, I’ve used journal publishing as my example here, as it’s the workflow that I’m most familiar with. But all forms of peer review should be similarly transparent in terms of their processes — from hiring decisions, to grant applications, to conference submissions, to publication, and beyond.

Phill Jones: Peer-review, like so many other components of the scholarly infrastructure, suffers from having far too much expected of it. Much like other quality control and assessment mechanisms, such as grant reviews or the impact factor, the stakes associated with its outcomes have risen dramatically as competition in academia for increasingly scarce resources has reached irrational levels. The result is a disconnect between frustrated researchers, who see career-making and -breaking decisions treated with a level of rigor unequal to the stakes, and journal editors, who protest that they really are doing the best that they can.

Of course, both are correct, because peer-review was never intended to make the kinds of judgements that are being asked of it. It was never designed to detect inappropriate analysis workflows and poor use of statistics, although some journals have specialist reviewers to provide support in these areas. It certainly isn’t capable of spotting mistakes in data processing or outright fraudulent manipulation of data. Even in areas where there is enough information in the manuscript for a peer to make an assessment, the busy schedules of most reviewers mean they simply don’t have enough time to find all the problems. Simply put, expecting researchers to perform the vital work of quality control of the scholarly record ‘off the sides of their desks’ is unrealistic.

How can this situation be helped? Let’s stop asking peer-review to do everything and build processes into scholarly workflows that provide feedback and correction at more appropriate stages. Going further, let’s make this a core part of daily work, rather than an afterthought to be done during a spare 20 minutes. Open research approaches can really help here by giving researchers earlier feedback on protocols, analysis algorithms, data sets, hypotheses, preprints and so on. When we stop asking so much of peer-review, we might feel more comfortable that it can fulfill its purpose.

Todd Carpenter: As someone who has been a reviewer on a number of papers and serves on the editorial boards for a handful of publications, I’ve reviewed a fair number of articles and proposals. Reviewing can be an interesting process, but it is by no means an easy one. Particularly as domains grow, as the amount of literature expands, and the analytical approaches to solving problems become more complicated, peer reviewing a paper is growing in complexity and commitment.

Back in the days when we used to spend considerable time in cabs going to airports outside of cities, I was speaking with a researcher who described their vetting process. It came up because they described having to read over a dozen or more papers a month.

Given how long I spend reading and reviewing a single paper, I wondered how someone could review a dozen papers in a month. What was I missing? So I posed the question: “How long do you spend reviewing a paper?” Let’s just say I was doing it wrong, or at least inefficiently, or too thoroughly. Or perhaps I have an outdated notion of what peer review should or could be in a high-throughput environment. A 2018 study published by Publons and Clarivate reported that the median time spent on a peer review was 5 hours in 2016. Clearly not everyone is taking as long as I am, but certainly others are spending far less time than I do. This drew my attention to the component elements of a review, and to what is required for a paper to qualify as having “gone through peer review.”

For its importance in publishing, peer review is an amazingly un-standardized term and process. There are several dozen varieties of publication review, ranging from editorial review by publication staff to double- or triple-blind peer review, along with open and closed versions of each. What is missing among all these varieties is clarity about what exactly is included in the process, who is selected to review the paper, and what is expected in the response to reviewer comments. Some reviewers conduct a thorough review of the entire paper, its processes, and its conclusions, while others conduct more cursory reviews of the abstract, the conclusions, the methods, and who is included in the references. Even the terminology in use about peer review has been inconsistently applied or understood. While peer review has existed for well over a hundred years, and became widespread in the mid-twentieth century, it wasn’t until July of this year that a taxonomy of peer review was released by an STM working group.

Clarity of expectations, communication about those expectations, recognition for, and potentially metrics around, those contributions would go a long way toward improving peer review. Perhaps there’s potential for standards around this… (but you all knew I would say that, didn’t you!)

___________________________________________________________________________

Now it’s your turn! What do you believe would improve trust in Peer Review?

Ann Michael

Ann Michael is Founder and CEO of Delta Think, focused on strategy and innovation in scholarly communications. Throughout her career she has gained broad exposure to society and commercial scholarly publishers, librarians and library consortia, funders, and researchers. As an ardent believer in data informed decision-making, Ann was instrumental in the 2017 launch of the Delta Think Open Access Data & Analytics Tool, which tracks and assesses the impact of open access uptake and policies on the scholarly communications ecosystem. Additionally, Ann has served as Chief Digital Officer at PLOS, charged with driving execution and operations as well as their overall digital and supporting data strategy.

Robert Harington

Robert Harington is Associate Executive Director, Publishing at the American Mathematical Society (AMS). Robert has the overall responsibility for publishing at the AMS, including books, journals and electronic products.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Lettie Y. Conrad

Lettie Y. Conrad is a publishing and product development consultant, working as a senior associate with Maverick Publishing Specialists, as well as with a portfolio of independent global clients. When she's not bringing a user-centered approach to scholarly content discovery and accessibility, Lettie serves as North American Editor for Learned Publishing and is a part-time information science doctoral student via a remote program at Queensland University of Technology.

David Smith

David Smith is a frood who knows where his towel is, more or less. He’s also the Head of Product Solutions for The IET. Previously he has held jobs with ‘innovation’ in the title and he is a lapsed (some would say failed) scientist with a publication or two to his name.

Haseeb Irfanullah

Haseeb Irfanullah is a biologist-turned-development practitioner, and often introduces himself as a research enthusiast. Over the last two decades, Haseeb has worked for different international development organizations, academic institutions, donors, and the Government of Bangladesh in different capacities. Currently, he is an independent consultant on environment, climate change, and research systems.

Charlie Rapple

Charlie Rapple is co-founder of Kudos, which helps researchers, publishers and institutions to maximize the reach and impact of their research. She is also Treasurer of UKSG and serves on the Editorial Boards of Learned Publishing and UKSG Insights.

Karin Wulf

Karin Wulf is Director of the Omohundro Institute of Early American History & Culture and Professor of History at the College of William & Mary. She is a scholar of early American and Atlantic history working on gender, family and sexuality.

Alice Meadows

Alice Meadows is NISO's Director of Community Engagement, responsible for engaging with and developing our member community. She was formerly Director of Communications and Director of Community Engagement at ORCID; and before that, she worked for many years in scholarly publishing, including at Wiley and at Blackwell Publishing.

Phill Jones

Phill Jones is the owner and principal consultant at Double L Digital which is a research, technology and management consultancy. He works with publishers, startups, institutions and funders on a broad range of strategic and operational challenges. He's worked in a variety of senior and governance roles in editorial, outreach, scientometrics, product and technology at such places as JoVE, Digital Science, and Emerald. In a former life, he was a cross-disciplinary research scientist at the UK Atomic Energy Authority and Harvard Medical School.

Todd A Carpenter

Todd Carpenter is Executive Director of the National Information Standards Organization (NISO). He additionally serves in leadership roles at a variety of organizations, including the ISO Technical Subcommittee on Identification & Description (ISO TC46/SC9), the Linked Content Coalition, and the Foundation of the Baltimore County Public Library.

Discussion

12 Thoughts on "Ask The Chefs: Improving Trust In Peer Review"

Wow, this must surely be one of the longest Ask The Chefs pieces we’ve had in a while? So many different angles and suggestions, but also sufficient overlap, that I found myself compiling a TL;DR summary which I’ll share here in case anyone else finds it useful!

1. Clarify what peer review is, and what it isn’t (should it be detecting bad actors, or validating statistical analysis?)
2. Define standards around this; codify the responsibilities of each party; audit / evaluate journals
3. Train reviewers in line with these standards and responsibilities
4. Signal to readers (and the public in general) about the nature / scope of the review that has taken place
5. Update reviewers about “what happened next” to support continuous improvement (and a sense of personal agency / accountability)

Within those, consider:
a. Making everything double-blind peer review
b. Ensuring any article containing data is reviewed by a statistician
c. Moving from APCs (which incentivize acceptance) to submission fees (which incentivize a quality review experience)
d. Improving the interfaces and workflows to better respect the time of those engaging with them

All of the above would of course make the publishing process even more expensive at a time when the costs of publishing are already such a cause of consternation / controversy. But precisely because the landscape of publishing is fragmenting, there is value in codes of practice, badges etc that provide greater transparency and consistency of meaning.

Thank you!! I was suffering and you saved me from needing a fourth cup of coffee to get through it.

I fully agree with Robert Harington’s opinion – so many issues could simply be solved by making double-blind peer review the standard, or even better, by revealing the names of reviewers after a completed review process. The single-blind review standard gives reviewers too much power and, unfortunately, often makes them biased. While some strong arguments can be raised against fully open review, I cannot think of anything against double-blind peer review!

Remarkably, only Alice noted ‘transparency’ in the context of trust – but let’s go further than transparency about process parameters: if the journal believes in its refereeing process, publish the reports (without names is just fine: the evidence matters, not the names) and, alongside them, the whole editorial communication with the authors to show if and where there are biases and mistakes. Haseeb noted in a different context that ‘about 60% of Americans would trust more if the data were publicly available’ – well, extrapolate that to referee reports and the editorial process. Thankfully, in the life sciences transparent peer review is becoming a standard – well over a decade after the first journals started this.
I was amazed that nobody discussed the importance of referee credit: if someone’s qualities as a referee were baked into their research assessment, they would certainly give the process the attention it deserves, and the authors of all the excellent reports that I see every day definitely deserve the credit for it. If peer review is a core part of the scientific process, make it count for research assessment!
I agree with Phill that standard peer review has to be supported by a separate layer of quality control – and ultimately data curation – both on technical details (reporting standards, data/methods transparency, statistics) and on research integrity (what Elisabeth Bik finds is universal – anyone who cares to look will see it everywhere, but referees cannot be expected to look systematically). All this costs money, but if journals are not prepared to do it, well, let’s close shop and preprint, as that is almost as good and much faster than many journals out there.
PS: if you are interested in transparent peer review & journal independent peer review check out http://www.embopress.org & http://www.reviewcommons.org

Interesting read! I will say that all the Chefs are highlighting much the same point about trust in peer review: the roles within the scholarly publishing community. Trust is built by how matters are handled over time, which is why, as an author, one trusts those publishers and journals that adhere to and maintain high ethical research standards, are indexed in renowned databases, and have principles in place for providing an authentic source of research content. As far as peer review is concerned, reputable publishers are trusted by authors for solid, unbiased peer review and for editors who get everything done right at the given point in time; smaller publishers, in practice, are trusted a bit less. Hence, in an age where everything is digital and open access, I believe trust in peer review is more important than ever, because every role (publisher, editor, reviewer, author) has multiple resources for keeping that trust built up. It comes down to one aspect: how faithful each person is to their role, and the example they set for the world in building trust in peer review. The whole publishing cycle is responsible for building trust in peer review and setting good examples to make it good practice. Of course, transparency in peer review is adopted for building trust in peer review, as Alice Meadows also mentioned.

I agree with Rick, though I would state it more strongly. This is a non-problem. Trust only becomes an issue when you try to universalize it, but in practice trust resides in specific communities. Researchers know (and trust) peer review, and make intelligent allowances for its limitations. For everybody else there is Fox News.

If we’re talking about the perceptions of the general population (say, related to the current antiscience vibe going around), it might make sense to cultivate a clearer understanding of what peer review is tasked to do and able to do.

I had a friend post some popular press reporting on a scientific study (the details of which I don’t remember, but I think it was of the “. . . in mice” variety) and someone else responded with skepticism. My friend replied, “but it was in a peer reviewed journal so it’s true.” I think this is a fairly common perception, and it can lead to problems. If one sees peer review as screening for “truth,” how does one understand conflicting peer reviewed sources? Perhaps by just deciding the whole thing is a scam.

As Bernd Pulverer notes (and based on the practices implemented at the EMBO stable), transparency is key to confidence in peer review, and publishing the reviews and the process is an effective means to achieve that. It’s a bit of extra work, but if reviewers are aware their reviews will be public (anonymized if needed), they’ll perhaps be less likely to include egregious statements, to overreach in experimental revisions, or to push authors to cite their own work.
And, stop the presses, but scientists aren’t necessarily wedded to peer review in its current form. We’ve all seen it abused, and the increasing disparagement of preprints for not being peer reviewed (at least when posted) seems Luddite-like. There’s a role for rapid release of manuscripts as well as for considered review.

Re. preprints: point well taken. In fact, I would argue that peer review as currently practiced (three referees per paper, multiple rounds of review) is not scalable at current publication volumes and given serial submission/rejection cycles (at least in the biosciences). Either scientists are incentivized to spend a good 10% of their working hours on peer review, or we publish most work as preprints clearly marked ‘not refereed’ and focus on properly reviewing maybe 20% of the key papers.
Publishers can add value to all papers in other ways and can thus embrace this change without necessarily losing out.

How about thinking way, way outside the box? Maybe academia is completely on the wrong track with how it 1. assesses and incentivizes faculty performance, 2. enforces a culture of competitiveness rather than cooperation, and 3. prioritizes and pursues specific research agendas over others without regard to what is truly needed by society. Peer review is one small widget on a ship that is headed for an iceberg. It cannot save us from what is coming and what has been ultimately caused by technology, informed by science, with no regard for any kind of shared mission. Fiddling on.

Comments are closed.