Next up this Peer Review Week is a guest post by Dr Bahar Mehmani, Reviewer Experience Lead in the Global STM journals at Elsevier. Bahar is also Co-chair of the Peer Review Week 2020 Events and International Outreach committee, and Vice-chair of the peer review committee and council member of the European Association of Science Editors (EASE).
Jeffrey Unerman’s paper “Risks from self-referential peer review echo chambers developing in research fields” was published in the British Accounting Review* on 2 April, 2020. Its topic fits very well with the theme of this year’s Peer Review Week — trust in peer review — so I was delighted to have the opportunity to interview Professor Unerman for The Scholarly Kitchen.
Please can you tell us a bit about yourself – your academic background and research interests?
I’m Professor of Sustainability Accounting at Lancaster University’s Management School in the UK. Since the mid 1990s my research has focused on how accounting can help organizations incorporate social and environmental sustainability factors into their decision-making.
I entered this area when it was a very small and niche subfield of accounting research. Also, far fewer organizations than at present seemed to be aware of, or concerned about, their non-economic responsibilities to society, or how social and environmental factors affected their long-term prospects. The field has changed considerably over the past 25 years and sustainability is now increasingly regarded as a major element of both academic research and accounting practice. I was very fortunate to enter this field in its early stages of development.
I am also actively involved in a number of engagement initiatives with policy-makers and the accounting profession in sustainability accounting and accountability.
In addition to my own research, I have also been embedded in the broader accounting and finance research community for many years. For example, I am a past president of the British Accounting and Finance Association, the UK’s academic association for accounting and finance. I was also honored to be awarded a fellowship of the UK’s Academy of Social Sciences in 2018.
How do you see the role of academics in the world of post-truth politics and populism?
An answer to this question needs to be set in the context of the role academic research plays (or should play) in normal policy-making. My perspective on this is that sound public policy is usually based on high-quality evidence of the type that academics produce. We would expect there to be a range of sometimes competing academic insights that need to be brought to bear in developing any policy. It is the role of policy-makers, including politicians, to test and probe academics in deciding the balance of evidence upon which they are going to base decisions. Policy-makers therefore need a reasonable understanding of the basis of academic research studies, so they can understand what the evidence is really telling them, as well as a respect for dispassionate evidence and an ability to engage with, effectively probe, and challenge this evidence.
However, the world of post-truth politics and populism is characterized by politicians who often do not seem to be interested in evidence produced by experts. Rather, they tend to dismiss the role of academics and base their populist policies on what they perceive the electorate want to hear (sometimes referred to as fake news) rather than on high-quality evidence. This clearly risks diminishing the role of academics in the world of post-truth politics and populism.
To maintain our relevance in such a world, academics need to work even harder to demonstrate the importance of reliable evidence as a foundation for effective public policy. In some countries, the coronavirus pandemic appears to have resulted in some (but by no means all) populist politicians coming to realize that natural phenomena do not respond to populist pronouncements of politicians. A problem, however, appears to be that these politicians don’t really know how to probe and evaluate academic evidence in deciding upon which policies to adopt.
What is unconscious confirmation bias? Why should the academic community be aware of it? And how can it damage our body of knowledge?
When we act with academic integrity, we seek to avoid and control for any conscious selectivity bias in evaluating the evidence that we use to reach conclusions in our academic studies. However, in addition to these conscious biases, all of us will also be subject to unconscious bias leading to selectivity in perception and evaluation of evidence. This unconscious selectivity bias is known as confirmation bias.
As, by definition, we are not aware of our subconscious confirmation biases, it is problematic to seek to avoid or control for them in our evaluation of research evidence. Unchallenged, we are likely to remain unaware of how they are affecting our reasoning and leading to unconscious selective attention to different elements of the data we observe.
Depending upon how strong our confirmation bias is on any particular matter, it can clearly have a stronger or weaker impact on the impartiality and reliability of the insights produced from our research, damaging our body of knowledge. This damage could ultimately lead to a loss of trust in our academic evidence base. As an academic community, I believe that we need to be more aware than we are at present of this unconscious confirmation bias, and seek to develop mechanisms to challenge it, to help us maintain and improve the reliability of (and trust in) the evidence we produce. Peer review should be one of the key mechanisms that helps us achieve this.
As science advances, academic communities around certain niche topics become narrower. How can peer review among narrow communities of academics result in echo chambers?
From the above perspective, a key problem with narrow niche academic research areas is that they risk creating deeper and narrower silos for us to work in. These silos themselves can take on the characteristics of echo chambers where academics working within them do not consciously identify or consider taken-for-granted assumptions shared by academics within each silo. Taken-for-granted assumptions can fade into the background, with new generations of academics not even considering challenging these assumptions or thinking of alternatives. This can be considered a form of confirmation bias hindering scientific progress.
I regard peer review quite broadly, encompassing reviews of work by peers at all stages of a project including (but not limited to) once a study has been submitted for publication. Ideally, all stages of peer review should help identify biases affecting the reliability of academic evidence. However, peer review is unlikely to identify confirmation bias present within a narrow niche subfield if the peer review in that subfield is all undertaken by academics within that subfield. This is because these academics are likely to share, and not question, the same taken-for-granted assumptions, such that these assumptions remain unchallenged and become ever more deeply embedded as an academic field’s literature develops. In this way, I believe that there is a very real risk of self-referential peer review echo chambers developing in many narrow academic subfields.
Where new generations of academics in a subfield are socialized into the accepted ways of doing research in that subfield without explicitly identifying or problematizing these taken-for-granted assumptions, these peer review echo chambers risk deepening and becoming more impermeable over time, entrenching confirmation bias.
While I have discussed these problems in terms of academic subfields, there can also be a range of taken-for-granted assumptions within broader academic fields. As a larger number of academics are likely to share these taken-for-granted assumptions at the broader field level, peer review might be even less likely to challenge the confirmation biases from these generally accepted assumptions.
Can you give us an example?
I imagine there are several specific examples in most academic fields. At a very broad level, there is a tendency across many fields for academics to conclude that statistically significant results provide robust insights irrespective of how socially or economically substantive the results of a study are. Stephen Ziliak and Deirdre McCloskey (see, for example: Ziliak, S. T. & McCloskey, D. (2008) “The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives”, Ann Arbor: University of Michigan Press) have long argued that this is highly problematic, and have clearly demonstrated that trivial impacts are often statistically significant whereas substantive impacts are often statistically insignificant. Academic fields that seem to be impervious to these major critiques of the real significance (or insignificance) of statistically significant results appear to be exhibiting confirmation bias, by unquestioningly taking statistical significance as a key determining factor in claiming importance for insights from their studies and in the body of knowledge they produce.
There is a lot of discussion about waste in research and reproducibility crises. How is it that the peer review system is failing to recognize or challenge potentially major weaknesses in a research topic?
Drawing on the above broad example, research studies conflating statistical significance with importance continue to be published in leading journals in many academic fields. This indicates that the peer review system has not been able to identify or challenge what appears to be a problematic taken-for-granted assumption across these fields.
A key problem, as I see it, is that where research studies are only exposed to informal and formal peer review from academics who are likely to share the same taken-for-granted assumptions, and therefore similar confirmation biases, confirmation bias is unlikely to be identified. We should have nothing to fear from active identification and challenge to our most fundamental of assumptions if these assumptions are able to stand scrutiny. However, for the long-term benefit of the quality of our research, our knowledge base, and the reputation of academic researchers, we need peer review processes that will help identify where assumptions that might have been embedded unquestioningly over several generations of researchers are unable to stand scrutiny.
How can journal editors, peer reviewers and authors confront peer review echo chambers?
What I hope my paper does is spark a debate about how to confront peer review echo chambers. I am seeking to raise awareness of this problem as something that requires deep reflection and debate among the academic community to bring forward a range of possible solutions.
As it is really difficult to confront biases of which we are not aware, we need to put ourselves in positions where we expose our research to informal peer review from a broad range of academics, not just those in our own silos. Confirmation bias is likely to lead us to unconsciously give greater weight to informal peer review feedback that supports our views and arguments than to feedback that challenges us. Perhaps one measure we can adopt, therefore, is to seek to counterbalance this unconscious bias by consciously favoring and acting on negative feedback from academics in cognate subfields outside our own silos.
Confirmation bias impacts other evaluation processes such as hiring and promotion. How can funders and policy-makers confront peer review echo chambers?
This is a really difficult issue. Challenging confirmation bias is likely to require emerging scholars to identify and challenge key assumptions early in their careers before they become deeply ingrained. This is likely to then require arguing against senior academics who are gatekeepers in terms of selection panels for jobs and funding – a very brave thing to do in an academic world where securing jobs and funding is highly competitive. Perhaps a partial answer to this is for selection panels to have as one of their judgment criteria the identification and constructive challenge of taken-for-granted assumptions for the benefit of the academic subfield.
As you know, the theme of this year’s Peer Review Week is Trust in Peer Review. In light of your paper, how can we improve and build trust in peer review among citizens, the academic community, scientific publishers, and academic institutions?
A key overall message from my paper is that failure to identify and then challenge the deeply ingrained, taken-for-granted assumptions that are likely to exist within all fields and subfields of research risks compromising the quality of our evidence base. This can lead to a loss of trust in our evidence, which would also represent a loss of trust in the outcomes of the peer review process.
I hope that raising awareness of confirmation bias will lead to academic communities developing ways of actively questioning the possibility of self-referential peer review echo chambers having developed in their own fields. Such awareness-raising is likely to be an important step in shifting biases from unconscious confirmation bias to more conscious selectivity bias — which can be more readily challenged through peer review. An important part of building and maintaining trust in peer review is ensuring it operates in a way that actively challenges fundamental taken-for-granted assumptions, thereby reducing the risks of echo chambers developing in academic fields and subfields.
Thank you so much for the time and information you provided us here.
*The British Accounting Review is published by Bahar’s employer, Elsevier.