After its successful debut last year, Peer Review Week is back — and it’s next week!

In the spirit of participation, we’ve asked the Chefs: What is the future of peer review?

Kent Anderson: If you take mainstream scientists and editors seriously, you could reasonably believe that the future of peer review will manifest as incremental improvements on current and past peer review approaches. If you take critics of peer review seriously, you could reasonably believe that the future of peer review will be a radical revision of current and past peer review approaches. If you take the gloomy forecasts of peer review workload seriously, you could reasonably believe that peer review — either incrementally improved or radically revised — is doomed to fail in the mid-term. So, what’s the future of peer review? I think it will represent all three forces proportionally — a vast and resilient center based on current practices enhanced by incremental improvements; a small and loud set of critics driving introspection and some embroidery around the edges; and a continued search for qualified reviewers and for reward systems that keep reviewers engaged. 

Alice Meadows: It’s hard to imagine scholarly communications without some form of peer review. Getting feedback from one’s peers is and should always be integral to the research process. Whatever our line of work, we all benefit from constructive criticism and guidance, and this is even more the case in research, which typically employs an iterative approach. So first and foremost I strongly believe that peer review does have a future. 

But whether it will look the same as it does now is another matter. There has been a rapid expansion in approaches to peer review in the past decade or so — blind/double-blind review is still popular, but open peer review has been enthusiastically adopted by some individuals and communities. Ditto portable peer review — whether done before or after manuscript submission. The preprint server arXiv exemplifies a different approach to peer review for publications in physics and maths, and similar sites for biology (bioRxiv) and social sciences (SocArXiv) have launched recently. It’s encouraging to see peer review continuing to evolve in this way to meet the discipline-specific needs of reviewers, the organizations for which they are reviewing, and the wider scholarly community.

However, although I believe that peer review in various forms (some yet to be developed) will continue to underpin scholarly communications, we do need to address — quickly — the challenge of finding and training the next generation of reviewers. This is especially important given that researchers aren’t just reviewing publications but also conference abstracts, promotion and tenure applications, funding applications, and more. Typically one organization doesn’t have any oversight of the work their researchers are doing for other organizations. So what may seem like a peer review challenge for one sector could become a genuine peer review crisis across all sectors if we aren’t careful. Acting soon to provide researchers with better training, support, and recognition for their peer review contributions — whatever form they take — is critical to averting such a crisis. 

Rick Anderson: At the risk of falling victim to the “Bears Always Sound Smarter” trap, I’m going to go out on a limb and say that the future of peer review is going to look very much like the past of peer review. It seems to me that the concept of peer review – putting multiple sets of expert eyes on a paper in order to maximize the likelihood that important flaws will get caught and corrected before it joins the public record of science – is just too fundamentally sound to be replaced by something that’s radically different.

That’s not to say that peer review always functions as it should, or that it has no downsides in addition to its obvious upsides, or that we won’t find new and better ways of managing it. And of course there have been some high-profile experiments with post-publication peer review, which have had success but about which I have some pretty serious concerns. Overall, though, my suspicion is that pre-publication peer review, as we’ve known it for decades and as we currently understand it, is here to stay.

David Smith: One can frame this question a number of ways: mechanistically, what is the future? Philosophically, what is the future? Practically? Scholars care greatly about peer review as a principal component of the scientific method. And they understand its role and its limitations in this context:

“The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darned sure of what the result is going to be, he is in some doubt. We have found it of paramount importance that in order to progress we must recognize the ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty — some most unsure, some nearly sure, none absolutely certain.”

Richard Feynman

So much is written about the flaws and limitations of peer review, yet change does not come. I think the statement above explains why. Peer review is a satisfactory philosophy of consultation, no more, no less. The successful scholar understands that, and the implications thereof. Mechanistically, aside from some assistive tools, I think there will be little change. Philosophically, there will be no change. Practically? Well, we seem to be heading into a post-factual world… There lie the threats to peer review, because there lie the threats to the scientific method.

Alison Mudditt: Peer review varies dramatically across disciplines and content formats, so I’m going to focus my response primarily on long-form content in the humanities and social sciences. While Kathleen Fitzpatrick argued for a system of open, direct peer review on public texts in Planned Obsolescence some ten years ago (and practiced it with that book), few have followed her lead. Changing peer review requires a significant cultural shift for the humanities where authorship is still very much a lone activity — the open dialog that Fitzpatrick and others recommend requires a radical shift. And of course, the other challenge is whether new models would be taken seriously by the institutions that rely upon peer review for accreditation.

So I think it is pretty unlikely that we will see double-blind review unseated as the dominant methodology any time soon. (And it’s interesting to note that AAUP’s new handbook of Best Practices for Peer Review, while acknowledging disciplinary differences and perhaps a different process for digital projects, focuses almost exclusively on the double-blind model.) That said, the growing dissatisfaction with the shortfalls of this model (see Martin Eve, for example) is likely to lead to increasing experimentation with alternatives such as open commenting and metric-based post-review.

As the director of a university press, I’m also interested in the future of a critical but frequently overlooked piece of monograph peer review, and that’s the curatorial role played by acquisitions editors. Editors not only undertake substantive editorial work on manuscripts, but this cultivation continues through the peer review process as editors select scholars who can best help to shape the manuscript, connecting it to other scholarship and to the right audience. This work is highly valued by authors, yet it is also expensive and resists efforts to drive down the cost of monograph publishing (as the recent Ithaka report confirmed). How can it be preserved as monograph models evolve?

Judy Luther: Peer review is an essential part of the scholarly research dialog and as such is crucial to the ongoing cycle of research. Whether feedback is public or private, prepublication or post-publication, free or paid, and the reviewers recognized or unknown, it will continue as a function. What’s not yet clear is how its form will change under the influence of a networked environment, as technology reshapes workflows and a growing number of new tools enable collaborative creation and commenting by readers, creating a continuous flow.

Peer review is core to the editorial process of both books and journals, producing a better work that serves as part of the scholarly record. At a time when there is an increasing volume of research available globally and competition for readers’ attention, the published article serves as a milestone, and one that warrants accuracy and quality.

I did not always feel this way. Enamored with the potential for a more complete picture of the research cycle and recognizing that emerging components could be easily linked, I could envision a future where the peer review process was diminished or no longer needed. However, there are only so many hours in a day for consuming content, and there is value for the reader in knowing that ‘this article’ is distinguished by having had a thorough review as part of the production process. It is far from perfect, with all the frailties that humans bring to a process; however, it cannot be easily replaced and we are better with it than without.

Angela Cochran: I am a big fan of peer review and the editorial process. It’s one of the reasons I made the career switch from journals production to journals editorial.

What peer review is and how it should be implemented is certainly in flux right now. I believe that we are moving toward a more transparent and inclusive process. That said, I’m not sure we are settled on the degrees of open and inclusive.

There are still very real fears from junior reviewers that some form of retaliation could come their way for critical reviews. There are also real fears from editors that they will lose reviewers and even associate editors if all names are disclosed. There are no known solutions to these problems.

Openness of review is going to need to be championed and established by each scholarly community. I worry that forcing the issue of open review (author names and reviewer names known and published) will further reduce participation and lead to even more insulated feedback loops than we already have.

So, I really want to talk about a more transparent and inclusive process. There can be transparency without naming names and reviewers should be given a choice to sign their reviews. Inclusion is, in my mind, more important. US and European journals need to do a better job of getting reviews from outside their territories. I suspect we collectively could use more peer review participation from practitioners. It is time for journals to review their databases and see who, exactly, their reviewers are and perhaps initiate that discussion with their editorial boards. Reviewers hold enormous power and I don’t feel like this gets analyzed enough.

There is very little value in throwing things online and hoping for the best. Peer reviewed journals aren’t exactly hurting for submissions and readership continues to grow. It seems clear to me that despite any flaws, this process of peer reviewing scholarly content is valuable to authors and readers.

Phill Jones: The interesting thing about this question is that it makes the (completely valid) assumption that peer review will change from its current form, and by extension that it needs to. The problem is that there doesn’t appear to be much of a consensus as to what, if anything, is wrong with it. Some say that it’s unfair and overly conservative, while others disagree and say that it should be conservative. Some say that reviewers aren’t incentivized adequately, but what kind of incentive would be sufficient to divert more of a researcher’s attention from doing their own research? It seems to me that reviewers are happy to do a certain amount of reviewing, in order to support their fields, but that there are limits, in terms of how much time scholars can take away from their primary obligations.

What we do know is that peer review is under stress. It seems to be getting more and more difficult for editors and editorial assistants to find reviewers and get those reviews back quickly. The exponential growth in the number of submissions is putting an increasing load on a limited pool of experts. But if there are more researchers writing, then surely there should be more peers available to do the reviewing. So what’s gone wrong? I think that the issue lies in a lack of tools to identify new potential reviewers. I’m assuming here that workflows haven’t changed all that much since my brief stint as ‘support to an editorial assistant’ many years ago. Back then the database of reviewers was manually populated with people the editors knew or were familiar with. That sort of workflow is likely to lead to a short list of favorite reviewers who will inevitably, over time, feel put upon. So if you’re asking me, which you are, the next big change in peer review workflows will be better ways to identify potential reviewers.

Ann Michael: Based on all of the evolution and disruption examples I’ve ever seen or read, future paths are often discovered when you address the root purpose of a process, product, strategy, etc., evaluate the root purpose for current relevance or appropriateness, and rebuild from there. In thinking about the future of peer review, perhaps that core question is: Why do we peer review?

If we assume the purpose of peer review is fundamentally to ensure the accuracy or “reasonableness” of structure (of an experiment or exploration — NOT of a manuscript), methods, conclusions, and even underlying data (currently growing in importance), is that purpose relevant or appropriate? I would venture to say that it is. (If it isn’t then we can start another thread on why or why not that is the case!)

So, how can we best accomplish that purpose? Could we use something like IBM Watson or Brainspace to first “read” all of the research done in a particular area or areas (for interdisciplinary publications) and ask it to rate the research for accuracy, method, and logical path/soundness of conclusions, and to itemize any potential issues or shortcomings? Maybe something like this wouldn’t be the only step in “peer review,” but it could provide input into the process that might diminish the demands on a peer reviewer while also ensuring that a fair, consistent, and thorough review has been done.

I’m not advocating the above as THE answer or even AN answer; what I do advocate is inquisitively, creatively, and openly considering any process and how it might be improved. I also advocate learning from what might be considered the “crazy ideas” at the fringes. We can and should brainstorm, hypothesize, and test ideas (hmmm…does that process sound familiar?).

Another area of caution, one which I think all of my fellow Chefs would agree is critical, is being careful not to paint all of scholarly publishing with one brush. The peer review process may very well be completely redefined or reimagined within some disciplines and barely touched within others. That may not be because the areas of little change are populated by luddites or laggards, it may just be because the current process works for them to a large extent.

Obviously, none of us know the future. However, we can all talk about trends, behaviors, and observations and think about what the future might hold. What do you think?

What might be the future of peer review?

Ann Michael

Ann Michael is President of Delta Think, a business and technology consulting and advisory firm focused on innovation and growth in membership organizations, scholarly publishers, and professional information providers. Ann is Past-President of SSP.



18 Thoughts on "Ask The Chefs: What Is The Future Of Peer Review?"

Many thanks to all for a very helpful summary, and for the collective recognition that there are massive divergences in current peer review practice across disciplines. There is the usual slight SK danger (as Alison M recognises) of a concentration on article-based scientific peer review when long-form peer review in the arts and social sciences constitutes (I would surmise) at least a third of the actual time spent on peer review in western scholarly practice. It’s also worth stating that a good deal of the long-form reviewing outside the US (including the University Press sector) is NOT double-blind (only the referees are anonymised), and that attempts to change in the latter direction have been strongly resisted, especially in some of the humanities and historical disciplines. Fundamentally, an eight-page report on a monograph on scholastic philosophy, with (say) three pages of overall assessment and five pages of detailed textual comment, of the sort that book acquisitions editors are used to seeing all of the time, is not doing quite the same thing as a three-line statement that ‘the methodology is sound and the results seem credible’. And that sort of extended engagement constitutes, rightly or wrongly, a major part of the prevailing disciplinary culture in many of the humanities and softer social sciences, expected both by early career researchers and by their senior colleagues. Alison is also absolutely right to emphasise the in-house curatorial role of acquisitions teams.

Whether the customary confidentiality of these extended peer reviews (save to the author and publisher involved), many of which constitute major pieces of scholarly exposition in their own right, is something that is nowadays justifiable, is another matter. There are very tricky and sensitive questions arising (not least of governance), especially in the UP sector where final publishing decisions are often approved by faculty editorial boards. But certainly during my time at CUP there were extended readers’ reports, often by Big Names on Big Names, that the wider scholarly public would have been fascinated to see, although probably (for the sake of all concerned) on a posthumous basis…

I second the observation that double-blind peer review is more the exception than the rule in monograph peer reviewing. In 45 years as an editor at two university presses I never saw double-blind review used once. As for the posthumous value of some peer reviewing, my favorite is the review that John Rawls wrote of Herbert Marcuse’s “One-Dimensional Man” for Princeton U.P., recommending against its publication. That review should have been saved for posterity, but alas was tossed by a secretary doing spring cleaning of the reject files.

I do peer reviewing for three library science journals. Editors tell me that authors find my comments helpful and appreciate my usual quick turnaround so that I get a fair number of articles to evaluate. The very good and the very bad are easy. My only comment on a recent excellent article was to add a subtitle with keywords that would make the article easier to discover.

I worry more about the articles in the middle. These normally fall into one of two categories. The less common one is the article that is technically perfect but doesn’t seem to me to have anything worthwhile to say. The topic is too limited, too old, or too unimportant. The other category occurs when the author has something to say but doesn’t say it very well. Some of these articles appear to be written by authors whose first language isn’t English. Others exhibit naiveté about the conventions of scholarly communication. The problem is that I judge both types to need a substantive reworking that goes beyond changing a few things here and there and may be beyond the capabilities of the author. Yet I worry about losing an innovative and exciting perspective on a worthwhile topic.

Prior to the creation of the first academic journals in 1665, researchers penned their ideas and circulated them to colleagues. That community rapidly expanded with the arrival of the print journal. Unfortunately, print reached its limitations because of costs, hence the proliferation of publication spaces and the creation of “scarcity” in the premier journals. The creation of journals allowed researchers of different philosophical and research interests to create selective volumes.

Today, with the Internet, the bounds created via journals and issues have been lifted, but publishing remains constrained by the idea of “the” journal. Yet publishers realize that researchers are after the content of the manuscripts and thus are now moving to continuous publishing of accepted materials, later to be collated into “journals”. But the key here is that there is now a collegial exchange option, which may, in part, be responsible for the approaches to early citation, pre- and post-publication reviews, and variances thereof. Peer review for compilation into journal volumes is vestigial.

As was mentioned above, reviews might be jointly handled by a cyborg (now a human with Siri and Watson as companions). The first thing this does is allow rapid scanning of the 34,000 journals or 2.5 million articles for redundancies and persiflage. It basically allows researchers not to have to adhere to a 500-year-old format where over half an article can consist of introductions and reviews of the literature. It overcomes the momentum to turn the idea of a “letter” into a full article for credit.

Thus, most of the above discussion is taken within an historic context, both with regard to how ideas are proliferated and the burden on reviewers of teasing out what was a footnote turned into an article.

The use of intelligent search/extract/summarize engines should significantly reduce the demands on those seeking or reviewing new knowledge, and it should simplify and lower the cost per unit of new information to be circulated. What it means for the publishing community, and how they maintain their profitability once the detritus has been stripped out by the research community, remains to be seen. At least one publishing house now understands this, as it moves to propagate access to articles without being bound by the archaic idea of subcollation in an increasingly obsolete and costly format.

Peer review is like computer science in that without it we have garbage in, garbage out.

One potential solution to the ‘revenge’ fear that academics experience with mandatory signed peer review is to delay releasing the reviewers’ names. Time is a healer, and a year after the decision the authors will be more circumspect about the criticisms of their article, and hence less likely to store up animosity towards their reviewers. The prospect of *eventually* being named may still encourage reviewers to be polite (if that’s the problem that mandatory signed review is trying to solve; I’m still not quite sure what the overall point is).

However, that’s just speculation, and what we really need is more research. The 1999 van Rooyen study is the only one designed to examine how the various levels of anonymity in peer review affect the process (i.e., it used an RCT). That paper has a sample size of just over 100, which is fine, but before we make a big rush into open peer review we really must collect more data. Absent that data, it’s just a fact-free discussion.

I agree. I was on the Peer Review Workgroup at the Open Scholarship Initiative meeting earlier this year and the lack of research was a real stumbling block for getting consensus around which direction to go in.

The idea of delaying publication of the reviewer name is an interesting one that I had not considered. Either way, we can start talking about giving reviewers options (sign the review, have your name published, have your review published) and see what happens on a voluntary basis. As with most experiments with changing a culture, baby steps that start as a low threat and then escalate depending on feedback may be the best way forward.

Great post! There is overwhelming support for the principle of peer review. However, there are several surveys that indicate a growing dissatisfaction with the practice. In my view, this is driven in large part by the lack of standards of practice. For example, what should be the minimum required before research papers are included in the scientific literature? Surely, we as a community can come to a collective understanding of what should be the minimum standard!? Until standards of practice are established, we will continue to see the proliferation of new (and largely untested) methods of peer review. We will also continue to see an increase in the variability of peer review practices from one journal to the next. This is not conducive to making the process more efficient for authors and reviewers, which should be a goal of every publisher.
Recent surveys point to a clear demand for more training of peer reviewers, but one can’t develop a training curriculum until we have a better understanding of what works and what doesn’t in peer review. Standards of practice could serve as the firm bedrock on which we can build a core curriculum and elevate the acumen of all peer reviewers.
PRE and the AAAS are working to tackle this issue and we welcome feedback and participation from the community.

I think the relentless negative press (including blog posts) about peer review is a big contributing factor to the sense of panic. When pressed to provide the evidence that it’s really that bad, most point to data-free polemics by former chief editors, or relate stories of a bad review experience they’ve had (while forgetting the nine other times it went really well).

As Alison Mudditt points out, editors are an essential component of the peer-review process. The peer-reviewers are the jury, and no jury can adequately function without a judge to organize it, guide it, and interpret its reasoning. That’s why post-publication peer-review will not flourish. And it’s why editors must be chosen carefully for peer-review to work well.

This is a response from our Managing Editor:
Great article (!)… have to disagree with Mr. Anderson when he says “…the old system of peer review … to maximize the likelihood that important flaws will get caught and corrected before it joins the public record of science – is just too fundamentally sound”. Is he kidding? “Fundamentally sound” ??? I’ve seen, with my own eyes, the degeneration of true peer-review. People are just gaming the system… as much as scientists say they support peer-review and welcome criticism, it is quite obvious they respond to criticism like any other Joe: with indignation. Some journals have become havens for “cabals”, so scientists can promote themselves and help themselves to tax-payer swag. It’s a con, that when executed adroitly, can pay handsome dividends.

Angela Cochran deftly states: “There are still very real fears from junior reviewers that some form of retaliation could come their way for critical reviews”… A-ha!! Someone who recognizes the truth!!

Kalina’s managing editor is on target. As I noted above, sharing collegial research started before the birth of the “journal”. Today, as noted in several spots, the review has become a surrogate to measure more than the research: it factors into the researcher’s promotion, tenure, potential future funding, and much beyond. Thus the choice of where to publish, what, and how often weighs well beyond the intent to share and receive feedback in a collegial sense. Potentially critical reviews, once possible to receive in a collegial exchange, have been intermediated by the peer review process, with all the trepidations of retaliation and/or rejection, since the consequences transcend an honest exchange. The journal publishers are not neutral in these transactions.

This needs to also be considered in context, particularly in the STEM area. Many Ph.D.’s are finding positions outside of The Academy, and are finding other venues for presenting even the most basic research to an audience less “critical” in the peer review sense yet open to such critiques. As I mention above, intelligent search engines coupled with humans are now able to access well beyond the academic journals to determine redundancies and similar efforts, as suggested by Kalina’s managing editor. This does impact the value of the journal in decisions regarding allocation of resources within and outside of the Ivory Tower.

The multiple mentions that time spent reviewing takes away from research are a euphemistic admission of the need to publish, particularly for junior faculty. The potential for “retaliation” points in this direction, since space in print journals today is like diamonds controlled by the cartel, with all the subsequent problems of an artificially created fiction. The arguments presented by the Chefs are internal attempts (collusion) to maintain this “collegial” or peer fiction.

Mr. Kalina’s managing editor is way off target in at least one important respect: he substantially misquotes me and then responds incredulously to the misquote. At no point did I say that “the old system of peer review” is “fundamentally sound.” In fact, I never used the phrase “old system of peer review.” Instead, I referred to the “concept of peer review.”

If Mr. Kalina’s interlocutor really is the editor of a journal, I sincerely hope that my library doesn’t subscribe to it.

Whether it is the “old system” of peer review or the “concept of peer review” is not the issue at hand. It is peer review as “practiced”, the use of the system today and all the concomitant issues, which need serious attention. The very materials that are published in STEM or STM journals clearly point out that no system is static.

Human systems are subject to interpretation, repurposing, abuse and rationalization, particularly when there are economic interests.

Whether it is the “old system” of peer review or the “concept of peer review” is not the issue at hand.

Misrepresentation is the issue at hand. Mr. Kalina’s managing editor attributed a made-up statement to me, and I was setting the record straight. The question of whether and how the practice of peer review needs to be reformed is an important but separate issue.

Rick Anderson, Sep 26, 2016, 10:01 AM
