If an electronic document could make a resounding thump as it landed in my electronic mailbox, the latest report on the future of peer-review would have done so.
That resounding thump (or splat!, or any onomatopoeia you prefer) was replaced by an occasional whirl-whirl-whirl as my mouse scrolled through the 117-page, single-spaced report by Diane Harley and Sophia Krzys Acord of the Center for Studies in Higher Education at UC Berkeley.
Their report, “Peer Review in Academic Promotion and Publishing: Its Meaning, Locus, and Future,” summarizes the state of affairs on the role of peer-review within the academy, provides a set of recommendations for moving forward, and suggests topics for future research. Appended to the report are the proceedings of a small workshop on peer-review and several background papers with an extensive literature review.
Anyone who has ventured into the peer-review literature knows that it is a topic riddled with strong opinions and short on rigorous research. Part of the problem stems from definition:
- Peer-review is simultaneously a value, a procedure, and a certification
- It means different things depending on whether you are an author, a reviewer, an editor, or a reader
- It is valued differently across disciplines
- There are many, many variants on the peer-review process
This should make any discussion of peer-review heavily context-dependent. In practice, it doesn’t prevent some individuals from making universal — often damning — statements about peer-review. A colleague of mine went as far as to rant at a dinner party that “peer-review is dead!” (We all laughed in response.) Harley largely avoids dramatic and hyperbolic language, focusing instead on summarizing what is known and highlighting where we should be directing our attention.
If there is a general theme in this report, it is that academic publishing has yoked a system of distribution (journal and scholarly book publishing) to a system of evaluation (promoting and rewarding faculty), and that this coupling has resulted in a dysfunctional system. She writes:
> Simply stated, institutional peer-review practices should be separated from overreliance on the imprimatur of peer-reviewed publication or easily gamed bibliometrics, a practice that encourages over-publishing and the selection of low-quality publication venues for peer-reviewed work.
Separating these two systems is no small challenge. In one workshop session (“A very tangled web: Alternatives to the current system of peer review”), several participants debated whether institutions should be paying external experts to review faculty members rather than relying on their publication record. One participant even proposed the creation of a consortium of elite institutions that would offer these services on a quid pro quo basis.
On the face of it, this is a workable proposition, yet a closer inspection reveals its flaws. First, if expertise in science is based on one’s contributions to the scientific literature, then one must rely on the very system one wants to devalue in order to identify and select expert reviewers.
Second, if we consider that each published article passed through at least one editor and two reviewers (plus a statistician at the top journals), then a junior faculty member with 10 publications has already been reviewed by at least 30 colleagues. Compare this with the two or three reviews an external system could provide. If reviewers are susceptible to making poor judgments, then we want more of them, not fewer.
Last, putting the fate of peer-review in the hands of an elite group of “experts” would radically concentrate power within the system. If critics of peer-review believe the system is already too subjective and biased, they should be wary of establishing a Faculty of Cronies.
The report describes other proposals for reforming the publication system — such as overlay journals — although Harley focuses more on describing them than on evaluating their merit. She does maintain that there are opportunities to learn as much from our failures as from our successes.
Several recent surveys have reported that scientists are generally satisfied with peer-review and believe it improves the quality of journal articles. To them, the system is neither “broken” nor “dead.” This puts the status quo in direct opposition to the views of the experts Harley selected for her report.
The real strength of the Harley report is not that it validates what most scientists believe, but that it offers a glimpse into what influential faculty, librarians, and foundations are thinking about the system — whether right or wrong — and how they wish to change it.
- ‘Facebook of Science’ Seeks to Reshape Peer Review (Chronicle of Higher Education)
- Are Peer-Reviewers Overloaded? Or Are Their Incentives Misaligned? (Scholarly Kitchen)
- The Price of Transparency and Peer Review (Scholarly Kitchen)