Around 600 attendees gathered in Chicago this week for the Eighth International Peer Review Congress (#PRC8). Held every four years, the Congress focuses primarily on peer review in medical and scientific journals and grant applications. I was a first-time attendee (along with many others, as it turned out) and I’m very glad that I made the trip. Apart from having many really interesting and enjoyable conversations — with a mix of people I already knew, introductions set up by colleagues and friends who knew I was attending, and serendipitous encounters — I also learned a lot. Here are some of my key takeaways:
- This is a serious academic conference! Unlike the conferences I usually attend, the papers and posters presented at #PRC8 were rigorous, scholarly, and double-blind peer-reviewed. As one speaker put it, “we must hold ourselves to the same standards as the science we publish.” Speaking with both my presenter and my attendee hats on, this could have been — was, at times — more than a little scary. Some of the more scholarly/technical presentations, many of which focused on very specific topics in medical/scientific peer review, were difficult for a non-expert to follow in detail. However, the atmosphere was very welcoming; questions and comments, from the simple to the complex, were strongly encouraged and dealt with courteously and positively (I saw no signs of defensiveness on anyone’s part). So even amateurs like me could grasp the overall objective, even if the details sometimes defeated us.
- These people are passionate about peer review, whether focusing on a very niche topic (Fact Checking Nucleotide Sequences in Life Science Publications: The Seek and Blastn Tool, for example!) or discussing some of the major issues in peer review (keynotes covered, for example, the role of preprints, issues of quality, and how to evaluate peer review). We all know that peer review (in some form) is essential to science, most of us worry that it doesn’t work as well as it should, and we certainly don’t all agree on what needs to be fixed or how to fix it. This was as true at #PRC8 as at any other industry conference. But it was very encouraging to see such a large group of smart people from across the scholarly community — associations, funders, librarians, publishers, vendors, and of course researchers themselves — fully engaged in finding solutions to the challenges of peer review.
- This community practices what it preaches in terms of diversity. From the kickass opening #allfemalepanel (double-blind selection process!) on “Bias Associated With Conflict of Interest and Peer Review” onward, many of the speakers — across sexes, genders, and ethnicities — were early-career researchers and professionals, not something I’ve seen to the same extent at other industry conferences. Unsurprisingly, there were also a number of presentations on issues of bias in peer review. I especially enjoyed Joao Martin’s presentation on “Geographic and Gender Bias in Peer Review of Applications Submitted to the Swiss National Science Foundation” (lessons learned: don’t allow author-suggested reviewers, and give preference to overseas reviewers); and Jory Lerback’s update on the American Geophysical Union’s (AGU) study of “Gender and Age Bias in Peer Review in Earth and Space Science Journals” (co-authored with Brooks Hanson). Very encouragingly, one of their key findings was that simply reminding the editors of Geophysical Research Letters about the need for diversity led to an increase in the number of women reviewers in just three months. A great, and easy to adopt, lesson for us all!
- There’s some really creative thinking and activity happening in peer review. One of my favorite discoveries during the Congress was that some medical organizations are starting to use patients (and/or their caregivers and advocates) as reviewers. This enables a whole new flavor of diversity and, judging from the two speakers I heard, it’s proving quite successful (if only on a fairly small scale at present). A study presented by Laura Forsythe from the funder PCORI (Patient-Centered Outcomes Research Institute) analyzed her organization’s use of patients and other stakeholders (such as insurers) to supplement its scientific peer review process. It found that, while technical merit was critical, the feedback provided by patients and other stakeholders also played a role in successful grant applications. Another speaker, Sara Schroter, who runs BMJ’s research program, reported on a mixed-methods study to evaluate the feasibility of incorporating patient reviews into their peer review process. Among her findings was that all seven editors in the pilot agreed that patient reviewers occasionally (six editors) or frequently (one editor) include insights not raised by other reviewers. Perhaps just as importantly, the patients themselves overwhelmingly think more journals should adopt patient review (88%), would recommend that others review (82%), and didn’t have any concerns about open review (80%).
- Ah yes, open review… I think it’s fair to say that there was as little agreement about this at the Congress as elsewhere (though, as mentioned, the discussions were much more civilized than they can sometimes be!). At one end of the spectrum, I heard at least one mention of the potential need for a triple-blind process (author, reviewer, and editor all anonymous). At the other, we haven’t even decided what we actually mean by open peer review — published and/or signed reports, pre- and/or post-publication, invited and/or crowd-sourced? There is no shortage of variations! A number of studies were presented that attempt to establish what works and what doesn’t. For example, PLOS allows its reviewers to choose whether to sign their reviews. Elizabeth Seiver reported on a study examining uptake of each option and whether it corresponded with reviewers’ stated attitudes. Perhaps unsurprisingly, she found that, while almost half of authors prefer to receive signed reviews, only about 15% reported usually signing their own reviews. However, in a number of cases, this was either because they had never been asked or weren’t sure why they should (or shouldn’t) do so.
- Predatory publishing was on everyone’s minds. Unfortunately, I missed what was evidently a great pre-Congress workshop on the topic, hosted by The Ottawa Hospital Centre for Journalology. But I did get to talk with two of the presenters, Kelly Cobey and Larissa Shamseer (who were also two of the authors of this recent and timely Nature paper on predatory journals), during the Congress itself. That paper shows clearly that predatory journals are not just an issue for scholars in the global south, as is often assumed. In fact, “Harvard University (with 9 articles) in Cambridge, Massachusetts, and the University of Texas (with 11 articles across all campuses) were among the eight institutions with the most articles [published in predatory journals].” Of course, much of the challenge lies in identifying which journals are genuinely predatory, as opposed to those that are well-intentioned but low quality, or unaware of or unable to adhere to industry standards. This was a topic much discussed at the Congress, both formally and informally, with a general consensus that this is a community problem that will require a community effort to solve. The emphasis should be on distinguishing between legitimate journals of any quality and journals that set out to deceive their authors, their editors, and the community. Watch out for more discussion of this topic here on the Kitchen in the coming months!
- Don’t underestimate the role of the editor in journal peer review. This was another theme that cropped up throughout the meeting and in the post-Congress Peer Review Week panel, “Under the Microscope: Transparency in Peer Review”. Editors are, of course, every bit as important as their peer reviewers, and yet they’re often overlooked in discussions about review. They’re ultimately responsible for the peer review process: ensuring editorial policy is clear and implemented; overseeing how reviewers are selected, instructed, and evaluated; identifying problems such as bias, and making improvements; reporting to their editorial and/or society boards on successes and failures; and more. One example of the influence that editors have came from a presentation by Malcolm McLeod on the “Impact of a Change in Editorial Policy at Nature Publishing Group (NPG) on their Reporting of Biomedical Research”. The study found “a substantial improvement in the reporting of measures that might reduce the risk of bias in in vivo research in NPG journals following implementation of a mandatory checklist policy”.
There was a very active Twitter presence at the meeting so, if you weren’t able to attend (or even if you did), I strongly recommend taking a look at #PRC8. And I can’t end without a hat tip to the organizers for a really interesting and valuable event, and especially to Annette Flanagin of JAMA, who truly has the patience of a saint, at least as far as my own personal experience of submitting, revising, and presenting a paper was concerned!
Note: I didn’t see every poster or attend anything like all the sessions at #PRC8 so I’d welcome feedback and comments, not just on my impressions of the meeting, but also on other interesting presentations and posters that I may have missed.