Ask a researcher what matters most to them in their work, and effective peer review will always be among their top three. At the recent STM conference in Philadelphia, Judy Verses, Senior Vice President, Research at Wiley, clearly articulated a vision for publishers. She focused on the need for publishers to recognize who we are in business to serve, exhorting us to consider the researcher as our “North Star” and to direct everything we do toward serving their needs. Publishers recognize that peer review is a paramount concern for researchers, and yet have not addressed some of the key concerns that authors and reviewers face. In this post, I suggest that publishers need to do more for researchers: to help authors, and to help reviewers understand their role and be recognized for their work. We need to focus on our “North Star”.


Perhaps a good place to start is to ask why peer review is so important to a researcher. Peer review is a key mechanism for ensuring rigor and establishing community standards on quality. It does not exist merely to filter good papers from bad. Peer review is a valuable service researchers provide on behalf of other researchers that allows for possible improvement of a paper; a review that leads to rejection is still valuable for the insights provided to an author. The ecosystem of review, in other words, allows the community to discuss research, and depending on the model (something we will get into later), allows for opinions to be shared without judgement or fear of retribution. A reviewer who participates in peer review gains mightily from this ecosystem, participating in the evolution of the research itself, playing a role in encouraging the career path of their colleagues, and stimulating their own development in their field.

Reviewers will tell you, though, that their role can sometimes be quite confusing, and — depending on the expectations placed upon them and how their work is treated — they may decide not to participate in the review process. Publishers can do better here. Journals vary in their expectations for a reviewer. A journal that is perceived to be of high quality will likely expect its reviewers to take a much tougher approach than a journal that is looking for good research but is not as selective. Publishers and journal editors would do well to think about how to guide their reviewers, since a reviewer may well take an entirely different approach depending on the level of review required. What would be wonderful is if a journal articulated expectations to its reviewers, perhaps even providing a series of parameters and types of question a reviewer could tackle when reviewing for that journal.

A reviewer is often a silent, anonymous actor in a journal’s ecosystem, so — apart from an internal sense of growth for a reviewer — it is important for publishers to find ways to overtly involve and reward their reviewers. Some publishers do this, of course, providing lists of top reviewers, or badges that a researcher can apply to their emails identifying them as an active reviewer for a publisher or journal. Scholarly societies also have a role to play here. Part of society life is the interaction with others in your field – a chance to participate in a wider community and encourage the career development of those around you. This societal aspect of peer review has largely been ignored by scholarly societies, even in a time of aging and declining memberships. Perhaps this should be a key area of focus for scholarly societies looking to engage their communities as active members and contributors to the field.

The elephant in the room when considering peer review is the issue of implicit bias. It is quite clear that bias is ingrained in the system; I doubt anyone could argue that the issue is not real.

A recent study highlights how deeply ingrained gender bias is in peer review. The American Geophysical Union (AGU) publishes around 20 journals and 600 papers per year, and has a membership of 60,000. Their study (Lerback and Hanson, Nature 541, 455–457, 26 January 2017) used self-reported age and gender data from 2013 to 2015, as well as data associated with anyone who had held an AGU account since 2011. AGU combined these data with publishing data to produce age and gender profiles for 25,000 authors: 27% of published first authors were women, and yet only 20% of reviewers were women.

An author in an emerging nation may have a harder time getting their work accepted. An author from an R1 institution may benefit from assumptions of quality over someone from a less well-funded institution. We look at the gender of an author and know that, in an era when editorial boards are still predominantly male, there will be implicit bias at work.

There are a number of problems to solve here, boiling down to issues of trust in the ecosystem. If an author trusts the peer review model, then that author will be well served, and indeed likely to act as a good reviewer when the time comes. If a reviewer understands there is equity in the system and expectations are set accordingly, the quality of reviews is more likely to be consistently high.

Much rides on the model of peer review being used – and there are quite a few. Broadly speaking, the humanities and social sciences use double blind peer review, and the sciences use single blind peer review. Then there are a plethora of attempts to open the review process to varying degrees.

At the American Mathematical Society (AMS), for example, we use single blind peer review and have done so for as long as I can find records. Single blind review means that the reviewer’s identity is anonymous, but the author’s name and affiliation are visible. It is probably about time we had a discussion about implicit bias in peer review under the single blind model. We have not yet taken this on within mathematics, but I imagine there will be a range of opinions. On the one hand, there are many who say that the system is fine, and that in a field such as ours you know who the authors are, not just from the approach taken, but from the iterative reference lists that define the progress of a mathematical field. One can add to this argument by noting that much content published in a math journal already exists in a preliminary form on the arXiv preprint server. On the other hand, bias is at work, and to ignore the problem is clearly not a good solution.

Double blind peer review is standard practice in the social sciences. Double blind review means that the reviewer does not know the identity of the author, and the author does not know the identity of the reviewer. From the point of view of implicit bias, double blind peer review appears to have more to offer the research community than single blind, even allowing for the sort of concerns I articulated when considering a field like mathematics. The discussion at least needs to be had rather than dismissed out of hand.

One society publishing organization that is embracing a discussion of double blind peer review is the Institute of Physics Publishing (IOPP) in the UK. IOPP launched a double blind pilot in January 2017, running through December 2017. They offered an option of double blind peer review to authors of two journals, Materials Research Express and Biomedical Physics & Engineering Express. Results from this study were encouraging: IOPP reports author uptake of the double blind option of around 20% on each journal, with positive feedback from both authors and reviewers, and with comments indicating that double blind review was seen as fairer than single blind review. Simon Harris, Managing Editor at IOPP, says,

We have seen significant uptake from authors of the double-blind option in our pilot, and received very positive feedback from authors too, most of whom saw it as fairer than single-blind review. We believe that the results of the pilot prove there is a demand to be met for double-blind review in these research communities. The double-blind offering has also worked well from an operational point of view.

It appears that this option will now form a permanent offering for these two journals, with the option being extended to other selected journals in the IOPP portfolio.

I recently sat around a dinner table discussing peer review with colleagues from a range of publishers – society, corporate, and independent non-profit alike. I was surprised at the pushback I received when I suggested we should consider double blind peer review. Part of this pushback is to be understood in terms of the wider publishing landscape of openness. My colleagues were not advocating for open peer review, but rather for a model that combines elements of blinded review with openness – termed transparent review. Essentially, a review from an anonymous reviewer is published alongside the paper as part of the journal’s published record, sometimes along with author responses and the decision letter. Two examples come to mind of journals currently deploying transparent peer review: the EMBO Journal and Nature Communications. My Scholarly Kitchen Chef colleague Alice Meadows posted an excellent article last year on this topic entitled, “What does transparent peer review mean, and why is it important?” One of the arguments I like about transparent peer review is that publishing anonymous reviews alongside the articles themselves makes it clear whether the journal in question is reputable or predatory. The sheer number of predatory open journals, and the spamming of the research community that accompanies them, is one of the real problems researchers face across all disciplines.

Transparent peer review is not synonymous with open peer review. Open peer review means the identities of the author and reviewer are publicly revealed, and more often than not published alongside the article openly. Advocates of increased openness in research gravitate towards this model, and indeed open access journals such as Springer Nature’s BMC journals have been using it across their portfolio for some time. Open peer review has its problems. Some researchers will tell you that if they are asked to review anonymously, they will be much more likely to participate, not fearing retribution, and able to express opinions freely – an important part of the review process. Some may say that a reviewer should take the same approach whether anonymous or open, but this ignores a psychological reality: anonymity is important for allowing free expression without fear of judgement. This is especially likely to affect early career researchers, who may well simply decline to review.

A path some are taking is to combine elements of peer review models. The Royal Society, for example, encourages reviewers and authors to identify themselves, but will abide by the wishes of authors and reviewers, keeping both anonymous, as it publishes both the article and the review in CC BY form.

I am not offering answers. I do think that publishers need to engage in a discussion about peer review with their “North Star” – the author, who is often, at the same time, a reviewer. We need to understand what services we as publishers and societies can provide to researchers to ensure that the role of reviewer is lauded, encouraged, and respected as perhaps the most valuable part of the publishing ecosystem.

Editor’s Note: This post originally included an inappropriate image, which has been removed and replaced. This was a failure in editorial diligence, and we apologize to our readers. Thanks go out to those readers who brought it to our attention; we pledge to do better in the future.

Robert Harington

Robert Harington is Chief Publishing Officer at the American Mathematical Society (AMS). Robert has the overall responsibility for publishing at the AMS, including books, journals and electronic products.

Discussion

35 Thoughts on "Peer Review – Authors and Reviewers – our “North Star”"

Excellent case studies. But your cover photo is irrelevant and “cheapens” this article about an important subject

Sometimes when seeking an illustration, one takes “artistic license” and looks for something symbolic or representative of the subject matter, rather than a literal depiction. Here, the magnifying glass is meant to get across the idea of taking a close look at the material. Note that yesterday’s post, about Springer Nature’s IPO, didn’t feature an actual photograph of investors buying stock, but instead a painting that the author felt was relevant to the concept (https://scholarlykitchen.sspnet.org/2018/05/15/springer-nature-ipo-withdrawn/).

I know it’s trendy on Twitter these days to make fun of stock photos, but if you can think of better images that represent the concept of peer review, we would appreciate your thoughts. Feel free to look through the stock photo service we use to spot something better: http://istockphoto.com (note: over the last 10 years we’ve used up pretty much every openly licensed image one can find on a Google Image search for the term as well).

This kind of image seems better to me: http://blogs.plos.org/onscienceblogs/2017/09/09/pubmed-and-its-post-publication-peer-review-tool-pubmed-commons/

I don’t think the Springer IPO image – which is thought provoking – is at all comparable.

I think the “banter” about this image on Twitter, which I have seen but not taken part in, potentially masks a slightly more serious issue. Often we use humour as an immediate reaction to defuse something that we don’t like. I have worked in peer review as a young woman, but I certainly didn’t pout like this while I was doing it!

Okay, thanks, that’s clearer. That is an excellent image, but as noted in the previous comment, we used it years ago for a post and try not to repeat ourselves.

The issue then is not the concept, but instead the expression on the model’s face? I would describe it less as a “pout” than as a skeptical raised eyebrow, but the interpretation of art is up to the individual. The banter on Twitter that I have seen seems to be finding humor in the image: while the symbolism makes sense, the literal concept of looking at a computer screen with a magnifying glass is indeed funny.

In one of our earlier posts on bias, a commenter suggested that we make an effort in our illustrations to show a variety of people in different roles, rather than just white males, as that would help normalize inclusivity. Here I deliberately chose a female image to try to live up to that idea. Would the same image of a white male model have been as controversial?

Also note that we have limited funds and time for sourcing images. We want to comply with copyright laws, so only select openly licensed images, or those from the stock photo service to which we subscribe. Unfortunately, the former are limited, and the latter tend to use models in their photographs. Again, if you have better suggestions for sourcing illustrations, we would appreciate them.

I think you’d struggle to find a stock image of a “man in a professional job” in which you can see his underwear

Thank you for switching the image

Thanks, and apologies again. Working on a small screen, I hadn’t seen the flimsiness of the blouse. A diligence failure on my part. Now if we could only get the Twitter cache to clear so it stops re-using the original image!

North Star or Sexy Reviewer? It was hard not to be distracted by the image chosen today: a young woman in a see-through blouse exposing her lingerie. Perhaps if the author had stuck to his North Star metaphor, there wouldn’t be such an outcry. Implicit bias in peer review extends to bloggers and their editor.

Thanks Phil. To be honest, I hadn’t noticed the blouse until you pointed it out (working too fast on a phone sized screen). This is a failure in diligence on my part for which I apologize. Swapped out now for a different image, thanks for the north star suggestion.

FWIW I didn’t even notice it on any of my screens and couldn’t figure out why people were upset over a magnifying glass. It was only after some comments when I looked very closely & saw what people were describing. I practically needed a magnifying glass.

I am a woman in a senior job in publishing. I did notice it right away and I have to say I was disappointed to see it. I expect women are more attuned to noticing casual sexism all around them than men are. I also notice that an image change only came when a man said something (thank you Phil!) but perhaps that is a coincidence

I am glad you swapped the image, that was the right decision

Shirley,

Hopefully my failure to notice was due to failing eyesight and not casual sexism. I’m not being facetious at all. I appreciate your pointing this out.

Adam

Comments on the inappropriateness of the image came in almost as soon as the post went live. My comment came in more than two hours after the first was posted. I also work on a 27″ retina screen. I honestly don’t know how anyone reads from a phone, let alone writes or edits a blog with one.

To be fair, the original comments didn’t specify the offending issue and concentrated on the model’s “pout” and on Twitter, the use of a magnifying glass to look at a computer screen. Once the issue of the clothing was brought to our attention, we took swift action.

As for the device used for writing and editing, unfortunately many of us travel extensively and/or commute, necessitating a less than ideal user experience for the voluntary time we put toward this blog. I can’t promise that I won’t continue to use the quite good WordPress App going forward, but a lesson has been learned about casual inattentiveness and the harm that can cause others. I do sincerely apologize for this mistake and promise to do better going forward.

We apologize for the lack of thought on including the stock image here. It is a mistake. We will replace it. This is certainly not what I wanted to achieve with this article. Again, apologies.

GIRAFFE IMAGES: There are options other than stock photographic services. In 1993 I wrote an article on peer review entitled “On giraffes and peer review” (FASEB J), which was peppered with pictures of giraffes peering over walls. Like giraffes, the peer-review system has evolved and, as I argued in the article, has gone seriously off track. We all have a nerve that travels from our brains to our vocal cords. Unfortunately, the path of a major blood vessel in the chest was set evolutionarily before the path of the nerve. So in you and me the nerve travels down to the chest and then loops back to the larynx. No big deal. But in the case of a giraffe … ?

Apropos the “issue of implicit bias,” it is worth mentioning that, whatever form it takes in STEM fields (the focus of this article), peer review in the humanities is a sea of subjectivity, typically a cesspool of grudges, envy, jealousy, and bad attitudes generally, even when the identities of the reviewers are known.

This is also true of peer review after the fact, such as book reviews.

Few reading this can imagine how much I would like to provide names and examples, accompanied by my own observations about the reviews and the motives of specific reviewers…

Rebecca Schuman wrote most eloquently about the topic on Slate in July 2014 (“Revise and Resubmit!”) when she compared it to “your meanest high school mean girl at her most gleefully, undermining vicious.” I find little in her comments with which to disagree.

Mr Harington’s field may suffer less from this kind of thing than fields like literature and history where, I can assure you, it is the norm.

I do not doubt your own personal experience, but that Schuman Slate piece has always rubbed me the wrong way. IMO it reads like it was written by someone who does not actually have any experience with academic peer review and who is just trying to be “provocative.”

Fair enough. You certainly are not alone in your reaction to that piece, but let’s keep in mind that Schuman’s observations, like mine, are discipline-specific.

My point is that in the humanities many reviews of academic monographs reflect highly subjective biases. The peer review of papers submitted to scientific journals probably does not elicit the same degree of vitriol. Competitive though some fields are, it’s difficult to imagine the review of a monograph dealing with physics or engineering being as “emotional” or “personal” as something concerning history or women’s studies.

My comments are not based so much on overzealous criticisms of my own publications as those of other scholars.

“27% of published first authors were women, and yet only 20% of reviewers were women.”

Those kinds of statistics should be used only with extreme caution. At least in the hard sciences, more often than not the first authors are graduate students, and it is perfectly reasonable to look to more established researchers to provide reviews on papers submitted for publication.

Not everything is a matter of bias. Sometimes it’s just the reality of the author/reviewer pool.

Hi Paul, thanks for the comment. Our study controlled for this because we looked at both the gender and the age of the reviewer/author/member pool. The difference held across the full age distribution for men and women (which are also different) and within each age cohort. So we could rule out the argument that editors were simply selecting more experienced reviewers, who tend to be male.

To reset the discussion, I offer a link to the quick video interview I did with Judy Verses, after her stellar keynote at the STM Annual Meeting: https://www.youtube.com/watch?v=22Fkca_viuk (3:20). Definitely worth 3-4 minutes of your time. Her keynote at STM offered some solid messages… researcher mentorship is key, listen to researcher needs, and accelerate the decision-making processes within publishing.

One unmentioned benefit of double blind peer review is that it functions to reduce bias, even if one can guess (or knows) the identity of the research group behind the work. Studies have shown bias against papers with female first authors for example. Even if the research group is known, the individual authors are still shielded from any effects that may unfortunately result from their gender or ethnic names.

Robert:
Great discussion. It seems to me that double blind removes most of the objections to the reviewing process while allowing for honesty and neutralization of bias. Nevertheless, we are talking about a rather small population in any discipline, and because of writing style, topic, argument, etc., most probably know who the author is and, in many cases, the reviewer.

Our journal, Geophysics, began using double-blind review on 1 January. Our editors discussed giving authors an option but decided that those who would benefit from their identities being known likely would choose single-blind review. They wanted to avoid both positive and negative bias.

Reviewers are researchers, too. The publish-or-perish and grant demands are based on published results and not “badges” for sacrificing for the good of the “team”. As Kent points out, I like to cite Maxwell, who realized that publishers’ profitability, like universities with student athletics, remains high while patriotism is well.

As was pointed out, in the humanities and social “sciences” such as economics, philosophical bias runs deep, hence the splitting of journals. It also exists in the sciences.

In the early days, researchers used to share procedures down to the serial numbers on chemistry jars. Today, no reviewer can begin to verify results, as we know from the number of non-reproducible findings.

I would like to see data, across the spectrum of subjects and rankings, showing that there is not a high level of concern regarding the time consumed in reviewing. The time needed to review new work, especially from developing countries and non-native English speakers, can multiply an honest reviewer’s investment of time and talent several times over.

Stock photos as ways to “enhance” an article? So much persiflage for a serious subject. “North Star” my eye!

“The time to review new work, especially from developing countries….” Tom, can you elaborate on the point you are making here?

Although uptake for double-blind peer review may have been high in the IOPP study, did it make any difference at all in acceptance/rejection rates for female authors or for authors in India or other countries? Uptake alone does not really provide any useful information.

The main aim of the IOPP pilot was to assess author demand for, and perception of, double-blind review. The study wasn’t broken down by author gender. We found that submission rates from India, Africa and the Middle East were higher for the double-blind option than for single blind, indicating more demand for double-blind review from researchers in these regions.

There was a large study from Nature-group authors with findings directly on point for this. When given an option for double-blind review, only 1 in 8 opted for it. There were no gender differences, but authors from India or China opted for double-blind far more often than did authors from Europe or the USA. Authors opting for double-blind review had much lower success rates at getting published (8% vs 23% for single-blind). (Few authors choose anonymous peer review, massive study of Nature journals shows.)

In my view, allowing authors to opt for whichever peer review model they think will work best for them poisons the comparison. Whichever model a journal uses, single-, double-blind, double-blurred, or open, it needs to be the same for all or it is inherently unfair.

A few thoughts on your nice overview of issues, Robert.

A couple of things I’ve found missing from the blind or sighted peer review debates are conflicts of interest and the role of editors. I have only reviewed one double-blind paper in the last several years, but I recall that in addition to the identity of the institution and authors being obscured, so were the acknowledgements and any indication of who paid for the study. If there is a convergence of study findings with the self-interests of the authors or the sponsors, that probably should raise some open-minded skepticism from readers. If conflicts are important to disclose to readers, why should they be obscured from the first readers (the reviewers)?

Also, the double-blind issue focuses on reviewers to the exclusion of editors. Reviewers advise, but editors decide. It’s easy enough for an editor to invite allies or assassins to review. Editors are often “peer editors”, that is, practicing scientists, and editors presumably are just as susceptible to implicit bias as reviewers. Hilda Bastian’s “The Fractured Logic of Blinded Peer Review in Journals” is worth reading.

A piece I just wrote on the pros and cons of double-blind reviewing might be of interest; in it I mulled over the practical difficulties of blinding and discussed a vision-impaired, but not completely blind, compromise. It does, I admit, include an image of a reviewer wearing a skimpy blindfold.

You raise a couple of interesting points, Chris. In the IOPP double-blind study the authors could include an Acknowledgments section, with funding details, in the manuscript sent out for review, as long as it didn’t reveal the names of the authors. And the Editors selecting reviewers are professional staff Editors rather than practising scientists, and so we hope less susceptible to implicit bias.

“And the Editors selecting reviewers are professional staff Editors rather than practising scientists”

So accept/reject/revise decisions are made by staff editors following recommendations from reviewers? Deciding whether the reconciliation of responses, revisions, and rebuttals is appropriate? The staff editor should be less affected by bias than the peer editor, but may not have the expertise to mediate reviewer/author disagreements.

It would be interesting to see the variety of approaches taken here. At my home society, the staff editors have almost no role in the peer review process. Once assigned, the subject matter editors handle manuscripts through to the acceptance/rejection stage, when accepted manuscripts are handed over to professional staff editors. During the peer review process, the role of the staff editors is to check up on late editors and help editors and authors with the blessed management software.

The peer vs staff editor model would be a good SK topic if it hasn’t been done.

The expertise of staff depends on the quality of the staff you recruit, the training provided, and the quality of the processes implemented. I work for a society publisher that employs qualified scientists to work within the community they serve. They often have a better idea of the needs of that community, and of the relationships within it, and can not only very effectively assign relevant reviewers (with no axe to grind and no need to be wary of who might be supplying the next research grant) but are also capable of giving lectures on the subject and associated areas around the world. Staff Editors are not just administrators; don’t write us off in such a way.

Yes, on most IOPP journals accept/reject/revise decisions are made by staff editors following recommendations from reviewers. Our editors are highly trained, all having science degrees (and in some cases PhDs), and they can call on the Editorial Board for papers where they could use some further help (e.g. where there are reviewer/author disagreements).
