Earlier this year, the American Geophysical Union (AGU) published a study of gender bias — at all ages — in peer review. It concluded that:

…women of all ages have fewer opportunities to take part in peer review…women were used less as reviewers than expected…The bias is a result of authors and editors, especially male ones, suggesting women as reviewers less often, and a slightly higher decline rate among women in each age group when asked.

Given what we know about bias in other areas of scholarship and scholarly communications, this is probably not surprising. Encouragingly, however, the AGU’s initial efforts to address the issue are already proving successful. Brooks Hanson (Senior Vice President, Publications) and Jory Lerback (former Data Analyst at AGU, now a graduate student at the University of Utah, Salt Lake City) provided an update on their study at the Eighth International Congress on Peer Review and Scientific Publication in Chicago earlier this year, which they summarize in this interview.


Can you tell us about your study on gender bias: why and how you carried it out and what you discovered?

This study originated with a visit from Marcia McNutt, who was organizing a conference at AAAS on bias. She suggested that, given the size of AGU’s publications (20 journals and about 6,000 published papers per year), we might be able to generate interesting data to evaluate bias. Marcia is a former President of AGU and led AGU’s conversion to online publishing in the late 1990s, so she knew our operations and the scale of AGU’s publications and membership, which provided a large statistical sample. A major challenge in looking at bias in publishing is that relevant data such as gender, age, and ethnicity are usually not collected in editorial systems. As a society publisher, however, we were able to combine our membership data, where this information is reliably self-reported by most of AGU’s 60,000 members, with editorial data. Age data are particularly important because women have historically been underrepresented in the geosciences and are only now increasing in numbers (nearly 50% of AGU members under the age of 30 are women, but less than 10% of those over 70). Without age data it is therefore hard to identify other biases in editorial operations.
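To give a concrete (and entirely hypothetical) picture of the kind of data linkage described above, the following Python/pandas sketch joins self-reported member demographics onto editorial records by a member ID. The field names and values are illustrative assumptions, not AGU’s actual schema or data.

# Hypothetical sketch of linking membership demographics to editorial records.
# Column names and values are illustrative only, not AGU's actual schema.
import pandas as pd

members = pd.DataFrame({
    "member_id": [101, 102, 103],
    "gender": ["F", "M", "F"],
    "birth_year": [1985, 1952, 1990],
})

reviews = pd.DataFrame({
    "member_id": [101, 102, 102, 104],  # 104 has no membership record
    "journal": ["GRL", "GRL", "JGR", "GRL"],
})

# A left join keeps every editorial record, with demographics filled in
# where a matching membership record exists.
linked = reviews.merge(members, on="member_id", how="left")
print(linked)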

In the geosciences, the first author is usually also the lead author and the corresponding author on a study. Overall, we found that papers with a woman as first author were accepted at a higher rate than those with a male first author, and this held across most ages and sizes of author groups. This was a bit surprising, since other studies have not found a significant difference. We also found that within each age cohort women were less likely to be used as reviewers than one would expect from the membership distribution or the author profile (successful authors usually make a good reviewer base). Because reviewing is important to a career in a variety of ways, for example in fostering networking, this raised concerns.

Although not yet fully documented, at the Eighth International Congress on Peer Review and Scientific Publication we presented initial results on interesting differences in author networks with age, and we think these partly explain the bias we see in reviewer selection. Specifically, it seems that women authors have a distribution of co-authors that one would expect from the age distribution of our members (that is, they are interacting with the expected membership distribution without bias), whereas men tend to have predominantly other men as co-authors. This is worrisome because, in many ways, selecting or recommending reviewers is a reflection of a scientist’s (author’s or editor’s) network. Thus this bias may still be baked in through existing or emerging collaboration networks, even though more young women are becoming geoscientists.

In addition, this pattern of differing networks is reflected in whom authors and editors suggest as reviewers for papers. Female authors and editors recommend reviewers whose age and gender reflect the pool of published authors, whereas male authors’ and editors’ reviewer suggestions are male-dominated. A recent study by Helmer et al. in eLife found similar effects, which they suggested showed homophily. Our data suggest that this may be the case at AGU for male authors, whereas women’s author networks are “balanced” rather than homophilic.
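As a rough illustration of the distinction between a “balanced” and a homophilic network, the sketch below compares the observed share of women among each group’s co-authors with the share expected from the overall author pool. All counts and the pool share are hypothetical placeholders, not AGU’s data.

# Hypothetical sketch: observed vs. expected share of women among co-authors.
# A group whose observed share sits well below the pool share looks homophilic;
# one that tracks the pool share looks "balanced". All numbers are made up.
expected_share = 0.28  # assumed share of women in the author pool

coauthor_counts = {
    "male authors": {"F": 18, "M": 82},
    "female authors": {"F": 29, "M": 71},
}

for group, counts in coauthor_counts.items():
    observed = counts["F"] / (counts["F"] + counts["M"])
    print(f"{group}: observed {observed:.2f} vs. expected {expected_share:.2f}")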

Since then you’ve started to address this issue within AGU – what changes to your policies and/or processes have you made so far?

After we recognized the bias in peer review, we conducted an experiment with our largest journal, Geophysical Research Letters, to see if we could correct it. The bias begins with suggestions from authors. Typically these suggestions are used about 30% of the time, and editors suggest the other reviewers. Our editors do suggest more women as potential reviewers than authors do, and women editors do better than male editors (perhaps reflecting the networking effect above), but still fewer than we would expect. So, as an experiment, we added a specific statement to the author instructions reminding authors of our results and asking that they consider diversity in their recommendations. Specifically, we noted that:

Evaluation of our journals’ peer review practices suggests that women were less likely than men to be asked to review. Please help us improve the diversity of our reviewer pool by including women, young scientists, and members of other underrepresented groups in your suggested reviewers (e.g., age, ethnic, and international diversity).

Note that this statement reminds authors to think not just about gender diversity, for which we have specific data, but also about other types of diversity, including other historically underrepresented groups in STEM.

We compared suggestions for the three months before and after this statement was added, and suggestions of women as reviewers increased significantly. Male authors in particular increased their suggestions of women by a significant 4% (χ² = 738.4). We have thus now included this statement on all our journals, and also in the step where editors recommend reviewers — see this post.
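For readers who want to run this kind of before-and-after comparison on their own data, here is a minimal sketch of a chi-squared test on a 2x2 contingency table. The counts are hypothetical placeholders, not the actual Geophysical Research Letters numbers.

# Hypothetical sketch of the before/after comparison described above.
# Rows: before vs. after the instruction text was added.
# Columns: suggested reviewers who are women vs. men (made-up counts).
from scipy.stats import chi2_contingency

table = [
    [1200, 6800],  # before: women, men
    [1500, 6500],  # after:  women, men
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p:.3g}")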

Our reviewer pool is also dominated by scientists from the U.S. and Europe, even though about 30% of accepted submissions are from Asia. We have not seen any significant increase yet in recommendations of reviewers from other countries and are still looking to expand their participation.

Are you planning to do any further research into this topic – for example to look at other types of bias (ethnic, geographical, etc) or at bias in other aspects of AGU’s work?

AGU does collect some ethnicity data from members (self-reported), but these data are not very complete and the categories are limited. We therefore doubt that we can see significant effects, even though we suspect that similar biases are present. We are continuing to monitor this in AGU Publications and are also looking at biases in our annual meeting (one of the largest scientific meetings) — in terms of invited speakers, poster versus oral presentation selections, and more. We will look to modify our data collection going forward to accurately represent our membership. Also critical is active participation from our members and the scientific community in providing the demographic data needed to carry this forward.

Do you think the bias you’ve uncovered at AGU exists in the peer review and other processes at other associations, journals, and publishers? 

We would be very surprised if these biases are not present throughout most of science. The correlation between co-authorship and reviewer recommendations also suggests to us that bias might be measurable in other more subtle and opaque aspects of academia such as research group formation.

Do you have any plans to collaborate with other organizations to help reduce/eliminate bias in future? 

We haven’t had these conversations yet, but we do think this is a chance for society publishers to work together on this topic, not just within the Earth sciences but across all the sciences. For many small society publishers the data are often not extensive enough (if they manage only one or a few journals), but by collaborating, a larger view of the science workforce is possible.

We would be happy to answer further questions or hear suggestions on how to carry this forward.

Brooks Hanson bhanson@agu.org

Jory Lerback jory.lerback@gmail.com

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

6 Thoughts on "Gender Bias in Peer Review: An Interview with Brooks Hanson and Jory Lerback"

This is fantastic, thank you so much for an example of an association that is addressing the gender bias inherent in scholarly systems!

Fascinating insights and, as Charlotte said, very useful example of how an association can use data to focus and improve its diversity efforts.

“Overall, we found that papers with women as a first author were accepted at a higher rate than with male first authors, and this extended across most ages and sizes of author groups.”

Do you have any plans to correct this other example of gender bias you uncovered in your study? Did you consider it? If not, why not? Are you yourself presenting homophily?

Given the small difference in acceptance rates (61% for female first authors versus 58% for male first authors in the study at http://www.peerreviewcongress.org/prc17-0308), it is not clear to me that this indicates a gender bias. I would consider the possibility that the acceptance rate is higher because female first authors might tend to submit a higher quality paper hoping to avoid rejection (sort of a “we try harder” effect).

Bill, thanks for the response. Yes, we’re not interpreting this difference as gender bias per se, and we favor the interpretation you suggest (trying harder on average); we discussed this further in the Nature paper. The difference in acceptance is significant given the large sample size, and we see it across most age groups and author-group sizes as well.
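As a rough illustration of the point about sample size, the sketch below applies the same kind of test to the 61% versus 58% acceptance rates cited above, using hypothetical submission counts; only the two rates come from the figures quoted in this discussion.

# Hypothetical sketch: a small gap in acceptance rates can still be
# statistically significant when the sample is large. Submission counts
# below are made up; only the 61% and 58% rates come from the discussion.
from scipy.stats import chi2_contingency

women_subs, men_subs = 5000, 15000
women_acc = round(0.61 * women_subs)  # accepted papers, female first authors
men_acc = round(0.58 * men_subs)      # accepted papers, male first authors

table = [
    [women_acc, women_subs - women_acc],
    [men_acc, men_subs - men_acc],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p:.4f}")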
