Sharing the value of peer review with the public is a central part of Sense about Science’s work in helping people make sense of science and evidence, and they have been involved in Peer Review Week from the very start. In this interview, their Director Tracey Brown, OBE, explains why peer review is so important to the organization, and shares her thoughts on how we can better educate people about its value and improve its quality.
Can you tell us a bit about Sense about Science — when and why the organization was founded and how it’s grown since then?
Sense about Science was founded in the UK, in the maelstrom of heated debates about evidence that dominated the early noughties. Vaccines, mobile phone radiation, GMOs, nutrition, crime and punishment, cancer screening, drops in child literacy, alternative medicine, research fraud… Our remit was to advance the public interest in sound science and evidence. From firefighting these kinds of issues, we quickly began to see repeated patterns and moved our attention to the system and culture changes that would stem them or empower people to ask searching questions about the nature and quality of information behind them. We now run global campaigns, such as AllTrials (to get all clinical trial results reported regardless of outcome), involving tens of thousands of people in advancing that cause and making the case for clarity and accountability with respect to evidence.
Sense about Science has been involved in Peer Review Week from the start. Why is peer review so important to you?
One of the first things that struck us, surveying the science and society landscape 15-20 years ago, was how little attention was given to the status and quality of any information claim. You had people on the radio promoting ‘research’ showing mobile phones caused cancer — that was research that had never been published so no-one could check it or know the basis of the claim. There were constant announcements of breakthroughs and medical miracles, from diets to alternative medicine. You had policy makers treating research papers as opinions when they consulted on issues and risks, and opposite camps in everything from climate change to child development advancing their own sets of facts.
Amid all that, we found it astonishing that so little attention was given to explaining that scholarly research papers were different in nature. Unlike a think tank’s press release or politician’s statement or news headline, it was expected that research findings should be subjected to the scrutiny of peers before being published. Knowing that research has gone through this process doesn’t tell you the results are right, just as a manufacturing standard doesn’t mean your washing machine will always work. But it does help you to situate and accord weight to the claim being made, and, crucially, to find out more about how it was arrived at. It also means the authors have worked knowing they will be exposed to scrutiny. All of that’s important to the public, to anyone who is taking a view or making a decision. That’s why it’s important to us. It is also why, 10 years on from our landmark peer review survey with Elsevier, we have asked the global research community what they think is happening to research quality today.
Why do you think peer review is so central to so many research workflows — not just publication, but also grant application, conference abstract submission, and more?
In terms of the research itself, I think there are fundamentally two ways that systematic peer review can improve its quality — and by quality I mean its ability to answer the question it is addressing. The first is that researchers anticipate critical scrutiny by their peers from the point of designing the research through to reporting it. This means they start asking themselves critical questions about how well they’ve designed an experiment or the size of their sample or whether their findings are reliable — well before they submit a grant application or conference abstract or paper for peers to look at.
Then there is the direct effect of reviewers’ comments on how researchers conduct and present their work. I once asked the late Sir John Maddox, who edited Nature for 22 years and was one of our founding board members at Sense about Science, what difference he thought reviewers made. He said it was hard to think of a paper that had not been improved by the comments of reviewers, but at the other end of things he could think of a good many that had been utterly transformed. And just ask any grant reviewer how often they spot a significant methodological flaw in a proposal.
But these are peer review’s effects, not its driver. The reason it’s so systematically applied in research is as a selection process. It helps a funder decide what to fund, as well as an editor decide what to publish or a researcher choose what to read. We don’t really have to think for very long to imagine how for the rest of us, in this age of information and misinformation — and so much information! — knowing whether something has been subjected to critical scrutiny is pretty valuable.
Do we need to do more to educate people about why peer review is so important to the research process? What sorts of tactics have you found work best?
We absolutely do! It’s important for all of us to know what the quality checks are for research. If you were marooned with your family on a desert island surrounded by shellfish, wouldn’t you want to know which ones the fishmongers would choose and why? That’s kind of what it feels like for most of society to be at an interface with research.
Perhaps the most important tactic for getting people outside of research to pay attention to the peer review system has been getting researchers to care about it. Back in those early days, when evidence and allegations were being traded in science and society debates on radiation or climate or GMOs, we found it strange that few thought to talk about questions of quality and reliability. One reason may have been that researchers, like many professionals, like to moan about the system they work with. (Try talking to lawyers about the justice system…) Inevitably there are places where a human judgement system, as peer review is, doesn’t deliver a good result; where a huge global system, as peer review also is, gets overloaded or in need of modernization. But that shouldn’t blind us to its value.
We have managed to change that defensiveness, equipping early career researchers in particular to talk with confidence about the strengths and weaknesses of the system, and the kinds of initiatives that could deal with those weaknesses and bring it into the 2020s. Initially this was by engaging different groups of researchers and publishers in writing and disseminating I Don’t Know What to Believe – a short guide explaining the peer review system, starting from the perspective of someone confronted by myriad conflicting claims. We began with just 10,000 copies, a popularization plan involving thousands of researchers, and a lot of skeptical eyebrows in the scholarly publishing community. We planned to get people asking ‘is it peer reviewed?’ as a starter question, and we did. Here we are 10 years or so later with millions of downloads, including in Mandarin, and it’s just about to be revamped again. Journalists, parliaments, school curriculum bodies, nonprofits, government procurement agencies, community action groups — we couldn’t have imagined how widely it would be used to explain and understand what published research is about. It now forms part of a wider program on understanding quality issues, which we’ve recently extended to data science.
At the moment there’s no standard way to measure or define the quality of peer review for any given publication. Is this something we should be addressing and, if so, why and how?
Yes, but I am wary of people getting lost in the complexities of that and watching another year go by in the search for a global consensus about the terms to use and what qualifies. I’d like to see some simpler measures and leadership. Most immediately, a peer review statement with each paper and, most importantly, in any publicity about it. This would state what has and hasn’t been reviewed. It’s not unreasonable for people, especially those outside of academia, to assume that the supplementary data provided by researchers has been looked at; a statement would make clear what a ‘light touch’ review has and has not covered.
Differences in the type and extent of peer review are not new issues, but digital, high-volume publishing has led to greater diversity in standards and more of the ‘light touch’ approach. Regardless of what anyone might think about the suitability of that approach, clearly we need to know what has been looked at.
On the other side, thinking about the users and mediators of the research, we need to put our heads together to extend ‘is it peer reviewed?’ into a further set of questions that people can ask about the quality and extent of review. That’s a harder task but we are looking forward to working with publishers and researchers on it.
What one thing could we do to improve the quality of peer review in future?
Human judgement systems have to flex to reflect the needs of the times and what is practically possible to do. The best thing we can do for peer review is ensure that concern about its quality and execution is always a live consideration in the research community. And the best way to do this is to equip the next generation of researchers with the confidence to become part of it as soon as they start publishing, and with the understanding that the system is a public good they are charged with protecting.
2 Thoughts on "Quality in Peer Review: An Interview with Tracey Brown, Sense about Science"
Yes, peer review is a selection process that “helps a funder decide what to fund, as well as an editor decide what to publish or a researcher choose what to read.” However, there is a stark distinction between review for publication and review for funding that should not be glossed over. While Ms. Brown’s remarks seem appropriate for the former, they often do not apply to the latter. The public should not be persuaded to think otherwise.
In one case, reviewers are confronted with something material: a manuscript. Their ranking of that manuscript may turn out to be wrong, but their judgement is mainly objective. In the other case, they are confronted with an interpretation of past publications and ideas about how those interpretations might be explored in the future – a grant application. Here, there is much subjective influence.
Almost by definition, the best ideas are difficult to think up, communicate and understand. If not, everyone would have already thought of them! This does not apply to ideas that only marginally advance our understanding. Sadly, knowing that their ideas are unlikely to be understood, those with the best ideas have to discard them and enter the marginal advancement funding arena in the hope that they will later be able to divert funds to work that they really want to do. In other words, they must be dishonest! As pointed out in Nature by Leigh Van Valen in 1976: “The norm of our science remains dishonesty, because it is made necessary for the survival of creative research. Often one may either be honest, or continue in science, but not both.”
Although I generally support peer review, editorial review is sometimes as good or even better.
Consider this: not one of Einstein’s papers from his miracle year underwent peer review. They all received only editorial review.