Image caption: comic on the quality of different methods of peer review (different forms of peer review, via Wikipedia)

If an electronic document could make a resounding thump as it landed in my electronic mailbox, the latest report on the future of peer-review would have done so.

That resounding thump (or splat!, or any onomatopoeia you prefer) was replaced by an occasional whirl-whirl-whirl as my mouse scrolled through the 117-page single-spaced report by Diane Harley and Sophia Acord of the Center for Studies in Higher Education at UC Berkeley.

Their report, Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future, summarizes the state of affairs on the role of peer-review within the academy, provides a set of recommendations for moving forward, and suggests topics for future research. Appended to the report are the proceedings of a small workshop on peer-review and several background papers with an extensive literature review.

Anyone who has ventured into the peer-review literature knows that it is a topic with more strong opinion than rigorous research. Part of the problem is one of definition:

  • Peer-review is simultaneously a value, a procedure, and a certification
  • It means different things whether you are an author, a reviewer, an editor, or a reader
  • It is valued differently across disciplines
  • There are many, many variants on the peer-review process

This should make any discussion around peer-review heavily context-dependent. In practice, it doesn’t prevent some individuals from making universal — often damning — statements about peer-review. A colleague of mine went as far as to rant at a dinner party that “peer-review was dead!” (We all laughed in response). Harley largely avoids dramatic and hyperbolic language, and focuses on summarizing what is known and highlighting where we should be directing our attention.

If there is a general theme in this report, it is that academic publishing has yoked a system of distribution (journal and scholarly book publishing) to a system of evaluation (promoting and rewarding faculty), and that this coupling has resulted in a dysfunctional system. She writes:

Simply stated, institutional peer-review practices should be separated from overreliance on the imprimatur of peer-reviewed publication or easily gamed bibliometrics, a practice that encourages over-publishing and the selection of low-quality publication venues for peer-reviewed work.

Separating these two systems is no small challenge. In one workshop session (“A very tangled web: Alternatives to the current system of peer review”), several participants debated whether institutions should be paying external experts to review faculty members rather than relying on their publication record. One participant even proposed the creation of a consortium of elite institutions that would offer these services on a quid pro quo basis.

On the face of it, this is not an unworkable proposition, yet closer inspection reveals its flaws. First, if expertise in science is based on one’s contributions to the scientific literature, then one must rely on the system one desperately wants to devalue in order to identify and select expert reviewers.

Second, if we consider that each published article went through at least one editor and two reviewers (plus a statistician for the top journals), then a junior faculty member with 10 publications has been reviewed by at least 30 of his colleagues. Compare this with just 2 or 3 reviews that an external system could provide. If reviewers are susceptible to making poor judgments, then we want more of them, not fewer.

Last, putting the fate of peer-review in the hands of an elite group of “experts” would result in a radical concentration of power within the system. If critics of peer-review believe the system is already too subjective and biased, they should seriously avoid establishing a Faculty of Cronies.

There are other proposals for reforming the publication system described in the report — such as overlay journals — although Harley focuses more on describing them than on evaluating their merit. She does maintain that there are opportunities to learn from our failures just as much as from our successes.

Several recent surveys have reported that scientists are generally satisfied with peer-review and believe it improves the quality of journal articles.  To them, the system is neither “broken” nor “dead.” This puts the status quo in direct opposition to what the experts Harley selected for her report have to say.

The real strength of the Harley report is not that it validates what most scientists believe, but that it offers a glimpse into what influential faculty, librarians, and foundations are thinking about the system — whether right or wrong — and how they wish to change it.

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

9 Thoughts on "Should Universities Pay for Peer Review?"

Gathering data for these sorts of reports can be, as you point out, tricky. Those who are most vocal about a topic are often advocates for change, driven by their own reasons. Someone who is generally satisfied with a situation is unlikely to spend much time ranting about their satisfaction. These opinions are much easier to find, but are often outliers and not representative of the silent majority of a field.

Can you provide more information on the “experts” chosen here (I’m not sure I’m willing to read the entire 117 page report)? It would help to get a sense of how balanced the study was.

I think the general problem on this topic is how to identify “expertise”.

For most topics concerning science, we identify those who have considerable experience in studying the subject. There is a small but dedicated group of researchers who study peer-review and even convene their own conference.

The Harley report gathers a different group of “experts” — not researchers, but those who have experience with the direct (or indirect) effects of the peer-review system, like provosts, librarians and publishers.

The group of experts selected for this report include people like Keith Yamamoto, who organized a UC system-wide boycott of NPG journals.
Yamamoto was also at the center of a UC-wide boycott of Elsevier journals in 2003.

These individuals can add a lot to the discussion of peer-review, but we need to remind ourselves that they may not reflect the largely silent and satisfied majority of scientists.

Some inaccuracies in your analysis –

“…if we consider that each published article went through at least one editor and two reviewers (plus a statistician for the top journals), then a junior faculty member with 10 publications has been reviewed by at least 30 of his colleagues…”

Not very likely. Most people publish their best work in 2-3 journals, and in many cases are handled by the same editors and partly overlapping sets of reviewers.

“…Compare this with just 2 or 3 reviews that an external system could provide. If reviewers are susceptible to making poor judgments, then we want more of them, not fewer…”.

Tenure and faculty promotions at my institution are based on letters from external reviewers, a file will typically accumulate at least 10-12 such reviews by the end of the process. Since the reviewers must be prominent figures in the field, de facto this is an “elite group of experts”.

Mike,
I follow your argument, but a consortium of expert reviewers necessarily limits the scope of participation. If Cornell forms a reviewing consortium with Harvard, Yale, Princeton and Columbia, for example, it eliminates the possibility that some of the experts in the field are located at Stanford, Berkeley, Oxford, or even the Technion in Israel. The “invisible college” created around the international journal is far more inclusive than any review consortium that could be devised.

More important, however, is that the definition of “expertise” becomes circular and insular: You are an expert, by default, if you belong to an elite group, but you cannot be reviewed by an expert unless you belong to an elite group. Entry into these elite groups becomes much more difficult if you start from outside. It is the antithesis of meritocracy.

I don’t disagree that the system today is already highly stratified, but creating these faculty expert review groups intensifies the exclusivity in science.

I agree that an expert group /reviewing consortium drawn from a limited pool of institutions can’t work, at least not for all fields. This doesn’t mean that one has to depend on the invisible bunch of unknown referees that reviewed a candidate’s publications. One tries to get reviews of a candidate from the best people in his/her field, who can also evaluate that person’s entire research activity to date, rather than one specific paper or another.

As Diane commented below, we’re delighted to have the Scholarly Kitchen’s perspective on our report and the opportunity to respond in true “open peer review” fashion. I’d particularly like to thank Mike F. for correcting Phil’s analysis re: our recommendation to strengthen institutional review.

First of all, we are not advocating devaluing the publication-based peer review system, but simply encouraging more meaningful publication by reforming what we do have control over: institutional peer review. Let publishers publish the best work as they judge it, but let institutions also reward the best work as they independently judge it (especially since institutions are *already* paying for peer review).

Second, we’re not proposing that universities create alliances of elite scholars who exclusively act as gatekeepers in a field. We’re proposing, more broadly, that universities figure out who the best reviewers would be (anywhere in the world, at any institutional level) for individual faculty members and create real incentives to secure quality reviews. We’ll let the scholars sort out the logistics as it makes sense in their subfields.

Third, it is true that a junior faculty member with 10 publications will have been reviewed 30 times. But, what is not certain is whether these 30 reviews represent 30 colleagues (or only 2 or 3 who are repeatedly approached and may even copy/paste reviews). We believe that strong, empirical research should be done on this issue to discern, in fact, how many people are really passing judgment on an individual. Only such a research program can truly discern how democratic our current system of peer review really is. Anyone know any PhD students out there in informatics looking for a dissertation topic?
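Purely as an illustration of the kind of empirical question raised here (and not anything proposed in the report), a minimal Monte Carlo sketch can show how the number of distinct colleagues behind 30 review slots depends on how small and overlapping a subfield’s reviewer pool is. Every number below (the pool sizes, ten papers, three reviewers per paper) is an assumption chosen only for illustration.

    # Purely illustrative sketch: all parameters (pool sizes, paper count,
    # reviewers per paper) are assumptions, not figures from the report.
    import random

    def distinct_reviewers(pool_size, papers=10, reviewers_per_paper=3, trials=10_000):
        """Monte Carlo estimate of how many distinct colleagues judge `papers` papers
        when each paper's editor-plus-referee slate is drawn from a pool of `pool_size`."""
        total = 0
        for _ in range(trials):
            seen = set()
            for _ in range(papers):
                # One editor plus two referees per paper, sampled without replacement
                # within a paper but independently across papers, so overlap can occur.
                seen.update(random.sample(range(pool_size), reviewers_per_paper))
            total += len(seen)
        return total / trials

    if __name__ == "__main__":
        for pool in (15, 40, 300):
            est = distinct_reviewers(pool_size=pool)
            print(f"pool of {pool:3d} potential reviewers: ~{est:.1f} distinct people "
                  f"behind the 30 review slots")

Under these assumed numbers, a tight pool of 15 colleagues yields only about 13 distinct reviewers for the 30 review slots, while a pool of 300 approaches the full 30; which scenario better reflects reality is exactly the kind of question the proposed research program would answer.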

Sophia, thank you for your response.
Your tone is somewhat defensive but I assure you that my intention was to highlight the report’s strengths as well as its weaknesses. Before reacting to your response, however, I need to clarify that there is no perspective of The Scholarly Kitchen. The post reflects my views alone. I am not a publisher, but a postdoctoral researcher in the field of communication and someone with 11 years of experience as a professional academic librarian. If there is any perspective of The Scholarly Kitchen, it is an emphasis on critical evaluation. We let others do the press releases.

With that now out of the way, the central problem that forms the foundation of this study (and your earlier study) is about rewarding academics. Unlike traditional economies where individuals exchange goods and services for wealth, the economic model for academics works through a gift economy: Scientists exchange information for recognition.

In his seminal book, The Scientific Community (1965), Hagstrom describes it succinctly:

“In science, the acceptance by scientific journals of contributed manuscripts establishes the donor’s status as a scientist — indeed, status as a scientist can be achieved only by such gift-giving — and it assures him of prestige within the scientific community…The organization of science consists of an exchange of social recognition for information.” (p.13)

Yet, the exchange does not stop there, and we should be cautious of the popular but incomplete notion that the exchange is a one-way street (e.g. authors give away their manuscripts and libraries have to buy them back). Social recognition in science is rewarded by a job at a prestigious institution, access to equipment, resources, experts, trained technicians, etc. Excelling in the exchange economy results in promotion and tenure, positions on editorial boards, easier access to grants and minions of graduate students and postdocs banging on your door to work for you.

Publishers enable this transfer of information and allow the economy to function.

But the value that other scientists (and deans, provosts, p&t boards, the general public) put on journal brands is not entirely of the publisher’s making. Social recognition comes from the scientific community itself and there is little a publisher can do if the community does not recognize and respect its brand. Similarly, those journals with a highly-valued brand defend it fiercely.

To excel in science, publishing is not enough: One must publish well.

The imprimatur allows for various quality signals to be sent out on the worth of a scientist’s contribution. It is not perfect, I admit, but it is much better than the generic signal that is sent out when a paper is put in a generic repository and peers are required to evaluate it post hoc.

In sum, the journal represents much more than a medium for making information public. It allows for the creation of various signals on the relative merits of a scientist’s work. Without that distinction, we would need to recreate it somewhere else. One could argue that keeping the value of the imprimatur with the publisher is more efficient than creating a separate institutional-level system for evaluating the work of a scientist.

Thanks Phil for, as always, a thoughtful review. And also for taking the time to do some deeper reading, which is always a challenge in the world we live in.

By way of background for your readers who are not able to dip into the report, we set out to do a scan of the complex issues of most import to the academy based on our previous research and our meeting participants’ advice. A shorter treatment would not in our opinion have done justice to the complexity that was our charge to elucidate, nor would it have made possible the literature review we felt was essential to move us away from the rhetoric we both agree tends to dominate discussions about peer review, and especially its relationship to open access publishing, library budgets, Web 2.0 utopianism, and so on.

I don’t think your comments are distinguishing (or you did not see that we were distinguishing) among the many types of peer review in the academy, and consequently you seemed to focus only on peer review in publishing. Also, focusing on one small recommendation regarding paying for external institutional reviews does not do the breadth and scope of the report justice. This recommendation was one among many that were floated as solutions to the pernicious problem of committees not reading tenure and promotion dossiers and relying too much on secondary indicators such as imprimatur and bibliometrics (but again, it was a recommendation for institutional reviewing, not publication peer review, and of course such a tweak to the system would require some serious thinking through, which you duly note and which I know my co-author Sophia Acord will comment on separately). Of course, paying for peer review in publishing is not unheard of and can be found in economics and in university press monographs.

There are a number of ideas in the report that we would like to alert your readers to, which you did not mention.

It is our position that the entire global academic publishing system is not sustainable on multiple levels, and research-assessment-type exercises in the EU and elsewhere appear to be ratcheting up the imperative to publish no matter what the quality of the publication. (As a side note, we have seen more interest from the UK than the US in this report, I suspect because of their history of RAEs and, some would say, their catastrophic effects on the higher education enterprise.) As we demonstrate, the entire system is strained to breaking.

From a financial perspective, your blog title might have read “Should Universities Continue to Pay So Much for Peer Review?” Peer review is a way of life for academics. Those in research-intensive universities spend literally untold (and unlogged) hours on it; universities (and taxpayers) pay their salaries to do it. As we note in our typology of different types of peer review, this significant time includes not only peer review for publishing but also, as importantly, assessing grad students, writing letters for external tenure and promotion reviews, institutional review, reviewing grants, major awards, etc.

We are not advocating devaluing the publication-based peer review system, but simply publishing less frequently and more meaningfully. We also advocate specific empirical research to truly discern how effective and efficient the current system of peer review, writ large, really is for those with the most skin in the game.

The Mark Ware report, which we cite, was as you know focused only on publication peer review. As such, its general conclusions are not at odds with ours. What it does not do is stand back and look at publication peer review as just a percentage of the total peer review being conducted by scholars and ask what the ultimate costs to the academy might be for this labor. As an aside, the survey had an effective response rate of 7.7%. Unless things have changed since I went to grad school, <10% is a very small response upon which to make sweeping generalities about the total population of interest. In any event, it is a well-thought-out survey instrument and certainly an important report with regard to the entire question of publication peer review and opinions about it by a small subset of the total population of interest.

A note on our methods (pp. 5-6). Of the hundreds of scholars, scholar-editors, librarians, and publishers that we have talked to over the six-plus-year period of our Future of Scholarly Communication research, the general consensus is that peer review is like the old saw about democracy: even with all of its flaws, it's the best available system we have. We have heard again and again that peer review in all of its guises is the coin of the realm but is also exhausting personal resources.

Finally, I would like to correct your suggestion that the current report was informed only by conversations with the meeting participants. We also mined the large volume of relevant published and unpublished material and interviews that forms the basis of the publication, Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines (Harley et al. 2010), as well as other work we have conducted. This included more than 160 formal interviews across the planning grant and the larger investigation, covering 45+ institutions and encompassing 12 disciplines. Additional investigatory work included attendance at many meetings, private conversations, literature reviews and web scans, and so forth, over this period. Many of our informants are directly involved in running universities, paying the bills (including library budgets), advising on federal policy, and generally managing the academic enterprise at their institutions and for their respective disciplines.

Again, thanks for your review. I do hope some of your readers interested in these complex topics will have a chance to look at the report and we welcome any feedback.
Diane

=====================
Diane Harley, Ph.D.,
Principal Investigator and Director, Higher Education in the Digital Age Project,
Center for Studies in Higher Education (CSHE)
University of California, Berkeley, CA 94720
http://cshe.berkeley.edu/people/dharley.htm
email: dianeh /at/ berkeley /dot/ edu

“We are not advocating devaluing the publication-based peer review system, but simply publishing less frequently and more meaningfully. We also advocate specific empirical research to truly discern how effective and efficient the current system of peer review, writ large, really is for those with the most skin in the game.”

Diane,
I think you are misinterpreting my post. I am not accusing you of devaluing the publication-based peer review system, only noting that some of your recommendations, when worked through more carefully, don’t seem to solve the problems they are trying to address. Almost as soon as Eugene Garfield created the Science Citation Index and the Impact Factor, academics began complaining about how they were being misused. In spite of the backlash against using the imprimatur and secondary indicators for the evaluation of academics, our collective dependence on them has only increased. Irrational exuberance and herd mentality cannot explain this trend.

While I praised you for recommending empirical research on the topic, we shouldn’t lose sight of natural experiments underway. While we often hear of publishers and libraries launching new projects and services, we rarely hear when the project fails or the service is turned off. Many of us have skin in the game. Admitting when we got things wrong may help future thinkers get it right.
