Editor’s Note: Today’s post is by Mark Edington. Mark is the founding director of the Amherst College Press and the publisher of Lever Press, two initiatives to build a pathway for peer-reviewed, digitally native scholarship from a liberal arts perspective through a platinum open access model.

In a number of recent posts in The Scholarly Kitchen (TSK), contributors have offered a variety of perspectives on the practice of peer review. Just this year, Robert Harington has pointed out the need for publishers to ensure that reviews generated by independent referees offer something of value to authors, while at the same time seeing to it that reviewers receive some form of recognition for their work. Tim Vines has discussed how the efforts of authors to reassert control over the peer review process risk underestimating the critical roles played by editorial professionals, and has drawn attention to a poorly sourced article on peer review that itself managed to expose weaknesses in how peer review is implemented. And David Crotty explored the question of including the findings and arguments of non-peer-reviewed materials, available through preprint repositories, in work published through formal, peer-reviewed processes. Indeed, a search through TSK’s archives turns up no fewer than 43 separate blog posts that include “peer review” in the title.

Readers of the Kitchen hardly need reminding that peer review, as both a practice and a matter of reputational concern for scholarly presses, stands near the center of what we mean when we say “scholarly publishing.” It would be reasonable to conclude — given the centrality of the practice to the core claims scholarly publishing makes to distinctiveness and value — that there would be some clear set of standards, some agreed-upon set of definitions, for how this critical undertaking is performed.

This might be all the more expected in view of the weight placed on peer review in underwriting the claim to authority that scholarly publishers make as to the unique value of what they set in the hands of readers. As long ago as 2000, this notion was at the very center of an effort by leaders in academe, research libraries, publishing, and learned societies to set out some navigational aids for charting the waters of change in scholarly communication. Their “Principles of Emerging Systems of Scholarly Publishing” — also known as the “Tempe Principles” — argued, inter alia, that,

…the system of scholarly publication must continue to include processes for evaluating the quality of scholarly work[,] and every publication should provide the reader with information about the evaluation the work has undergone.

The first clause in this phrase is uncontroversial; it essentially says peer review is part of what scholarly publishing does. But the second, seemingly anodyne, is anything but. In what way, exactly, do publishers “provide the reader with information about the evaluation the work has undergone”? At least in the world of scholarly monographs, there are no clear or consistent systems for performing this simple function across presses. Indeed, while giving an assurance that a press conducts systematic peer review is a condition of membership in the Association of University Presses, and while two years ago the Association took a significant step in publishing a “Best Practices for Peer Review” document offering guidance to editors and editorial boards, the association itself sets no standards or minimum requirements for what peer review means.

It is somewhat perplexing that a practice both central to our claim to distinctive authority as publishers, and implemented by all of us, does not have clearer, more public standards — or a way of sharing with readers how those standards have been applied. This seems like a first-order problem, especially in a moment in which the value of scholarship itself — and the knowledge it sets forth — are increasingly relativized or simply dismissed.

In view of this, I have been thinking — in close collaboration with Amy Brand of the MIT Press — about what specific steps might be taken to achieve greater transparency in the practice of peer review: how the various forms of review, traditional and emerging alike, could be defined, and how they could be communicated to readers in simple, clear ways. A first and critical inspiration for this work was the realization by my colleagues on the editorial board of the newly launched Lever Press that the best way of addressing the reputational challenge facing open access presses — the widespread but ungrounded notion that there is some iron law of nature linking open access as an outcome to a poor peer review process — was to state in a public way both the understanding we have of peer review processes, and a commitment to disclosing, in each title we publish, which process we have implemented. Not surprisingly, we have been inspired and guided by the work of Creative Commons, which has succeeded in equipping creators with new tools to specify and tailor the rights they are willing to share: a simple system of symbols (or “buttons”), each linked to a plain-language document, a more comprehensive license, and a bit of machine-readable code that helps cataloging systems identify and share those rights.

With generous support from the Open Society Foundations and the American Academy of Arts and Sciences, we convened a gathering of stakeholders in Cambridge in January of this year to share our work and explore a variety of the questions that shape the current conversation about the place, conduct, and labor of peer review. Attendees included colleagues from research libraries, scholarly publishing, and learned societies; researchers with a focus on scholarly communication; and technology innovators working to create preprint repositories, establish systematic means of assigning credit for the labor of writing reviews, and provide the systems of metadata that enable the greater development and discovery of the scholarly record.

We created a report summarizing both our preparatory work and the conversations of our gathering, and shared it back with our colleagues for comment and annotation using PubPub, MIT’s open publishing platform. And with the help of contributions and fruitful insights from these colleagues, we’ve now developed a final report of the work.

The report suggests that, while scholarly publishers differ in many ways — institutional affiliation, audience, business model, editorial process, for example — the fact that we all share a commitment to peer review is a signally important common link that distinguishes what we publish from the work of all other publishers. This distinction, on which we base our claim to the value and authority of what we set before readers, is not well served by being a “black box” phenomenon, which effectively makes our titles blind items (think of mattresses), with the marks of quality hidden inside. (Being a blind item — a good whose claims of quality rest on characteristics the buyer cannot see — is one of a number of ways in which scholarly monographs have come to behave, from an economic standpoint, like luxury goods, which often fall into the same category.) It is time, we suggest, for publishers to join in an effort to articulate just what is meant by widely used but imprecise labels like “blind” and “open” and their various descriptive qualifiers.

The report also proposes that we can and should do much better at disclosing to readers both how we apply various forms of review — made clear on the basis of definitions established and promulgated in a public way — and the object to which the review is applied (e.g., a manuscript, a proposal, a dataset). Following the inspiration of Creative Commons, we think this can be accomplished by the development and consistent implementation of a system of symbols, or icons, that convey both the kind of review undertaken and the focus of that review. That said — and, again, instructed by the Creative Commons example — it is by no means enough to create and use a clever system of symbols to disclose peer review; those symbols must be tied to, and grounded upon, a set of definitions commonly held by the principal stakeholders in the system: publishers, yes, but also librarians, learned societies that act as publishers, and of course authors.
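
To picture what such a scheme might involve in practice, here is a minimal sketch, in Python, of how the two dimensions the report describes (the kind of review undertaken, and the object to which it was applied) could be encoded as a small controlled vocabulary. Every name, term, and URL below is a hypothetical illustration, not part of any published standard.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewKind(Enum):
    """Hypothetical controlled vocabulary for the kind of review conducted."""
    DOUBLE_ANONYMOUS = "double-anonymous"  # neither author nor reviewer knows the other
    SINGLE_ANONYMOUS = "single-anonymous"  # reviewer knows the author, not vice versa
    OPEN = "open"                          # identities known to both parties
    COMMUNITY = "community"                # open, public commentary


class ReviewObject(Enum):
    """Hypothetical vocabulary for what was actually reviewed."""
    PROPOSAL = "proposal"
    MANUSCRIPT = "manuscript"
    DATASET = "dataset"


@dataclass
class ReviewDescriptor:
    """One review event: which kind of review was applied to which object."""
    kind: ReviewKind
    obj: ReviewObject
    definition_url: str  # link to a plain-language definition, on the Creative Commons model


# Example: a monograph whose full manuscript received double-anonymous review.
descriptor = ReviewDescriptor(
    kind=ReviewKind.DOUBLE_ANONYMOUS,
    obj=ReviewObject.MANUSCRIPT,
    definition_url="https://example.org/review-definitions/double-anonymous",
)
```

The point of such an enumeration is the same as the point of the icons: a closed, shared list of terms, each anchored to a public definition, rather than free-form labels that mean different things at different presses.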

As greater emphasis is placed on discoverability and the relationships between authors, reviewers, and ideas, we will only be able to create greater consistency in the conduct of peer review and greater transparency in disclosing it by ensuring that our metadata include information about what forms of review were conducted on a given published object (and, where appropriate, links to the reviews themselves). Existing systems of persistent identification for scholarly objects — notably digital object identifiers (DOIs) and identifiers for researchers, such as ORCID iDs — should be seen as critical to enabling greater transparency in peer review, and the organizations behind them should be tapped as participants in designing the systems to achieve it.
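
As one hedged sketch of what that metadata might look like, the fragment below ties a published object’s DOI and (where disclosed) a reviewer’s ORCID iD to review terms from the vocabulary sketched above. The field names are invented for illustration and do not follow any existing deposit schema; a real deployment would need to align with whatever standard the community agrees on.

```python
import json

# Hypothetical machine-readable peer review metadata for one published object.
# All field names are illustrative, not drawn from any existing schema.
record = {
    "doi": "10.9999/example.monograph.1",  # persistent identifier for the object
    "peer_review": [
        {
            "kind": "double-anonymous",    # term from the shared vocabulary
            "object": "manuscript",        # what was reviewed
            "definition": "https://example.org/review-definitions/double-anonymous",
            "review_doi": None,            # link to the review itself, where appropriate
            "reviewer_orcid": None,        # disclosed only under open review
        }
    ],
}

print(json.dumps(record, indent=2))
```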

We are under no illusions that our work offers a fully developed or implementable solution. But by sharing our efforts and the insights of our colleagues, we hope that other stakeholders will now take leadership roles in creating conversations among their own constituents. We know there are many important questions that our suggestions do not address: how and whether reviews of preprints could use such a scheme; how it would relate to the work publishers do (since they would not, by and large, want to provide warrants for a review system they did not in some way oversee); and how to ensure that such a system would not be taken advantage of by unscrupulous actors.

Having found so many colleagues who agreed with our sense that the time has come to find a better way of sharing with readers a clearer account of what we do as scholarly publishers, and who had so many good ideas for how to achieve this, we now want to share our report more widely. We hope that it will catalyze conversation and collaboration among all who share an interest in assuring that the distinctive qualities of scholarly publishing continue to offer clear and meaningful contributions to the exchange of ideas, even in the midst of what seems a moment of discourse ungrounded in facts and thought.

Discussion

29 Thoughts on "Guest Post: Greater Transparency in Peer Review Standards and Practices – A Report on Work in Progress"

The basic assumptions of this post are not correct. The first is that peer review = editorial quality. Peer review is only one of many editorial inputs. Indeed, the most prestigious publishers rely more heavily on editorial staff. The second is that what makes the best publishers as good as they are (as measured by the quality of their publications) is a trade secret. To disclose anything but generic information undermines the special value these publishers provide. This post is a proposal for the commoditization of the publishing process. Superior publishers need not apply.

As I’m sure you know, Joe, one of the criteria for membership in AUPresses is a commitment to conducting “the peer review of their scholarly publications in a manner consistent with commonly understood notions of peer review among university presses.” There’s no assumption that peer review equals editorial quality. There’s just a statement of fact: university presses (at least those that are members of AUP) conduct peer review, and a commitment to doing so is what sets them apart in a distinctive way from other sorts of presses. Other sorts of presses can (and do) publish things of high quality; but that’s not the point here.

But this is not what you said in your post. You move quickly from a comment about editorial quality to the implication that peer review is what ensures that quality. And this is simply not true. Peer review is a guarantor of some aspects of editorial quality, but not all aspects. Better to let the process rest with the publishers, whose success in managing this is reflected in their relative prestige. I used the word “commoditization” deliberately.

Joe, could not agree more. I stopped reading once I realized this was the case . . .

“We think that [describing the rigor of peer review] can be accomplished by the development and consistent implementation of a system of symbols, or icons, that convey both the kind of review undertaken and the focus of that review.”

Icons, badges, emojis… Really? Seems to me the only way to demonstrate the rigor of peer reviews would be to open up the anonymized process: put the full-text reviewer comments and author responses online, much as science journals already do with supplemental information. Not sure how many journals besides F1000 and eLife do this, or who actually reads it besides the authors’ rivals.

I am with Chris Mebane: are we really going to emojify the peer review process as a way to make that process more “transparent”? I have almost 1,000,000 objections to this, but I have to think through them very carefully, and will likely write a blog-post response at the punctum books blog later today or tomorrow. But, in short, the traditional peer review process is broken and failing authors in so many ways that this Peer Review Transparency project does not even touch upon or come close to acknowledging (such as the fact that there is barely any institutional credit that is actually valuable for researchers who agree to review other researchers’ work, and the fact, also, that there are huge gender, race, status/rank, and other imbalances in who agrees, and who doesn’t agree, to serve as a reviewer). And even with the AAUP’s recently released pamphlet of “best practices” in peer review, we still don’t have a template that we could share with reviewers on “how to write an actually useful & respectful peer review.” Reading through this post and looking at the graphic symbols developed for this project, I thought of all sorts of symbols I would like to develop, such as a symbol for “50 well-regarded and successful male scholars turned down our invitation to review this book until one really beleaguered female scholar agreed to do it because we begged her and used personal leverage.” How about this: every time we publish a book, we list the names of all of the people who refused to review it or didn’t even respond to emails asking for assistance, along with the names of those who actually undertook the review. Can I have a symbol for that? How about a symbol for “this book is freaking brilliant and it doesn’t take a so-called genius reviewer to tell us what we already know: we need to publish this book.” Can we have a symbol for that? How about a symbol for “this book got two really terrible and unhelpful reviews, so we threw those out, and asked for a third review, and that one was great, so then, finally, this book saw the light of day, as it deserved to”? Or a symbol for “we purposefully did not send this book out for peer review because the author has too many enemies who are shills for whatever the current, predominant critical paradigm is in their field, but we know this book is important and needs to be published”? I could go on and on, but for once, I won’t.

Plenty of allegory, but little substance. The quality of peer review is correlated with editorial quality (poorly selected or vetted peer reviewers reflect poor editorial standards). But superficial elements, such as badges, are not the solution. Open peer review is flawed, because the reviewer is open to criticism (more than praise). And when all are not substantially remunerated for their work, the system becomes exploitative overall. How will Lever Press deal with these issues that affect it as much as they affect any publisher?

To begin with, it is important to acknowledge how differently peer review functions in publishing monographs compared with how it works in journal publishing. One commenter has already referred to the key role played by staff editors, which has no counterpart at all in journal publishing (where the initial decision-making editors are academics themselves, not staff editors). There is also no counterpart to the faculty editorial boards that all university presses have, which generally make the final decisions about what to publish. The interaction between staff editors and those editorial boards, and the way both of those deal with external peer reviews, is key to how the whole process works. (I cover this in great detail in my “Listbuilding at University Presses” in James Fyfe and Rita Simon, eds., Editors as Gatekeepers, Rowman and Littlefield, 1994: https://scholarsphere.psu.edu/concern/generic_works/sf268g973.) There is no counterpart, either, in journal publishing to the use of advance contracts, which is very prominent in book publishing. Peer review is thus a great deal more complicated for books than for journal articles and, for that reason, more difficult to systematize.

For one thing, editorial boards differ greatly in their size and functioning, so how they operate in the peer-review setting differs from one press to another. How are you going to be “transparent” about that: list the names of all the editorial board members and explain how they had input into the decision to publish the book? Some boards operate as full boards on every book; others break down into sub-boards focusing on specific areas like humanities, social sciences, natural sciences, etc. On some boards each project is assigned to one board member for close scrutiny, and that board member then reports to the rest of the board; on other boards, all members read all the reports and enter the discussion. How are you going to explain that to the public at large?

Then consider also that every press has its own reader’s report, and the questions reviewers are asked to address vary from one press to another. How are you going to explain that: reproduce the reader’s report in an appendix? Presses also use advance contracts in different ways, with different language and different expectations. Peer review often is applied differently to projects at the advance contract stage compared with the final stage. How is that going to be made clear to the public? Furthermore, commercial academic publishers use peer-review systems very similar to those of university presses, except that they have no faculty editorial boards making the final decisions. How is the public going to be apprised of that difference?

You all know that scholarly publishing of monographs is a very complex business, which is a mystery to much of the outside world. It is similar in that respect to university admissions policies, procedures, and decisions: those remain a mystery to outsiders as well (and to me, even though I have been interviewing applicants to my alma mater for over 20 years and have been running college fairs for almost a decade). Universities have not been able to make that process very “transparent” after all these years. I’m not very optimistic that academic presses will succeed where admissions offices have failed.

One commenter has already referred to the key role played by staff editors, which has no counterpart at all in journal publishing (where the initial decisionmaking editors are academics themselves, not staff editors).

To be fair, this varies quite a bit from publisher to publisher, and from journal to journal. Many journals employ professional, full-time, in-house editors that make these sorts of decisions. Others employ working academics.

Yes, I continue to be disappointed that the notions behind PRE haven’t caught on. I think the real issue is that they’re less valuable for the established publishers that could afford to monetarily support such a service than they are for the smaller publishers trying to establish themselves, who can’t offer the financial support needed for such a service to take hold. It’s hard to establish standards when the big players don’t see the value in joining in.

I’m a bit surprised at the negative response to what seems a good faith effort to provide some structure, or at least a common language to describe what is often a quite variable process. (As an aside, I’m sadly not surprised by the tone — a reminder to all, we’re working to make our comments section a more welcoming place, so please consider the tone of your comments as if you were asking a speaker a question at a public meeting).

Yes, peer review is a complex activity and there are no easy answers, and in some ways the variations in how peer review is performed are part of what differentiates one publisher from another. And I agree that emojis and badges seem an artifact of the (hopefully) obsolete notion that the key to motivating users in a digital age is “gamification”.

But I would dearly love to have a better-defined set of terms around peer review. When an author is told by a journal that it practices “open peer review,” that could mean a host of different things (are the peer reviews published alongside the article? Are the reviews signed? Who knows whose identity?). It would be nice to be able to answer that with a few terms rather than a few paragraphs. Similarly, PLOS ONE has been around for 12 years now, and we still don’t have a descriptive term for the style of peer review that it pioneered.

Or consider the problem of differentiating between a predatory (deceptive) publisher and a small, inexperienced startup publisher that is trying to make an honest effort but that has a lot to learn. Are there standards we can offer that honest publisher to help them evolve to where they need to be?

Suspect the problem leading to negative responses is that this approach is trying to wise us up by dumbing us down (not intentionally I’m sure).

Peer review is ill-defined precisely because of the many variables that come into play.

Because peer review is defined by each editorial team, there are many variations within each publisher, and themes within themes: how many reviewers were approached, how many accepted, what their credentials were for reviewing this piece of work, how many reviews were received, the weighting given to each (often an unconscious process, as the editor has more respect for the established academic reviewer than for the postdoc), and so on. To state all this for each article is potentially more burden for probably little real value, given how many articles individuals read, browse, or download each week.

And are badges, emojis, or whatever really the answer? For predatory journals they are potentially easy to mimic. Who governs and polices them?

Is that a problem though? Is the reader done a disservice when an editor decides that they know and trust the author (or one of the reviewers) and gives the paper less scrutiny? Would you want to know this when reading a paper — this paper went through rigorous peer review, this one didn’t? Is there some level of balance between standardization and the subtle activities that a really good editor can bring to bear on a given work? As an author, would you rather know in advance what the peer review process is going to entail or is it better to go in blindly? Are there at least broad, general definitions and standards that can be agreed upon to add more transparency?

But yes, you’re right about badges and emojis, and I agree that beyond being pointless, they are easily faked as well.

Thanks for all these comments, which illustrate how vexed the question of peer review has become (and perhaps how foolhardy it was to venture out into these waters!).

To be clear, I surely understand that there are a great many problems with the peer review process, problems many of these comments have raised — about assuring that institutions accord appropriate acknowledgement for the labor of peer review; about assuring that those who provide the work of peer review receive credit for their work; about diversifying the pool of reviewers; about the benefits of, and challenges to, opening the entire process and moving toward a fully open system; etc.

All of those are very important questions. I hope we’ve been clear from the very outset that they are important, and equally clear that they are not the question we are addressing (and we realize that there are many people already addressing these questions, and doing it well). I also hope it may be allowed that one does not have to presume to address all problems associated with peer review to try to address one of them.

As a commitment that scholarly publishers share, peer review at its basis reflects a notion about what distinguishes scholarly publishing from the publishing of scholarly (and other) ideas not done by a scholarly press. It is simply that it is a work of the scholarly commons; that it is, in other words, a way of approaching publishing that involves a conversation broader than that between an editor and an author. How that conversation is shaped — whether it’s fully closed, partially closed, open, or whatever — varies from publisher to publisher and from case to case. What does not vary is the simple idea that scholarly publishers get called “scholarly” not just because (perhaps not at all because) of the content they publish — some publishers that are not members of a group called “scholarly publishers” publish content that we would recognize as having scholarly merit — but rather because of the process of review, critique, and revision that they are committed to putting in place.

Those familiar with the “three ‘layers’ of licenses” described by Creative Commons (https://creativecommons.org/licenses/) will see that we’re up to something similar here — and similarly limited in scope: simply a way of defining our terms, signaling them to readers, and making them part of the metadata associated with an object we’ve published.

There is an aspect to this which, in a closed (or “blind”) system of review, does indeed remain secret — viz., the specific content of the critique and the identities of those enlisted as reviewers. But the fact of having this process in the first place is hardly a trade secret; it’s what we all say we do. And we are not proposing here to make all peer review open, although we’ve surely heard from participants in our work the view that such an outcome should be a goal. All that we’re proposing here is to be a little clearer about just what process we’ve employed in reviewing each piece we publish, and to be prepared to explain why we reviewed it as we did. C’est tout.

Let me be clear: by raising a lot of questions, I do not mean to suggest that this effort is not worth undertaking. Even if it fails in its ultimate ambitions, we will undoubtedly learn something valuable from the attempt.

David Crotty pointed out that some journal publishers have professional staff who do some editorial work similar to what the staff of university presses do with books. But I’m not aware of any university press that employs such staff in their journals departments. This is usually confined to the largest STEM journal publishers who can afford to hire such staff.

The emphasis on peer review as what sets scholarly publishers apart may mislead some into thinking that only considerations of merit enter into publishing decisions. That may be true for journal articles, but it is manifestly not true for university presses, whose decision-making crucially involves considerations of sales potential, just as the decisions made by commercial publishers do. (The exception may be open-access publishers like the one you head at Amherst, Mark, where the idea is to make decisions not dependent on projected sales.) One also has to wonder which is considered a scholarly publisher and which is not. Is Basic Books a scholarly publisher? Is Public Affairs Press? Lynne Rienner Publishers, for which I acquire books in political science, definitely is, because it publishes no trade titles at all. But Basic Books and Public Affairs Press do, and in fact this type of publisher publishes only trade titles. Does offering a trade discount disqualify a publisher from being considered “scholarly”? Hardly, because all university presses (except OA presses) do so also. So, how does the line get drawn, and where does peer review factor into drawing that line?

Finally, another difference between journal and book publishing is that peer reviewers get paid in the latter but not in the former. Does that make any difference to how peer reviewing is done?

An example might be Cold Spring Harbor Laboratory Press, which employs professional editors to run its journals.

By all means let the project go forward. But no reputable publisher should participate in it. Why would a publisher want to commoditize itself? Why would the Lancet emulate eLife?

I’m pretty shocked by the dismissive tone and almost-deliberate misreading of Mark’s piece that pervades most of these comments. Full disclosure: I’m a member of the Lever Press editorial board, and one of the people most involved in working with Mark, Amy & others to develop this project. I’m also not a publisher or editor, but a faculty member who has written & edited books, and co-founded an open access academic journal, so I come at it from a different perspective. A few brief points:

– Think about it from the perspective of academic writers & the professional systems that we are working in. We are encouraged to publish in peer-reviewed venues, and rewarded for doing so (ideally), but for those of us in book-centered fields, we have no way to demonstrate to our institutions that our books are peer reviewed, and what that might mean. I have published 6 academic books, and each has had a quite different peer review process – how do I convey that to my Dean or colleagues, and make it clear that each process had a distinct rationale for why it was the model used? The same applies to book chapters and journal articles. The system Mark is proposing is not to normalize or commodify such processes, but just to provide a mechanism to indicate what they are transparently.

– I wish more people engaged with the point of this project—to live up to the Tempe Principle to make the nature of peer review known to readers—rather than highlighting the dozens of problems that it doesn’t solve. In this way, it reminds me of many peer reviews I have read, where reviewers wonder why the author did not write the essay/book that they would have…

– Why do people read “symbol” and “icon” (Mark’s words) and immediately call it an emoji or badge? It saddens me that people committed to developing & sharing knowledge are so dismissive of visual communication as a useful tool and mode of expression.

Jason: thank you for your very honest and direct comments here. I have known about this project since about March 2017 and have been vocal in not wanting to see, at the very least, icons that signify what sorts of peer review protocols any particular publication might have undergone, but I’ll set that resistance aside for the moment to genuinely listen to you and to absorb your points with sincere reflection. I also want to apologize here, to everyone, Mark included, for the tone of my initial comments here, which was somewhat fueled by a sense of frustration with highly-organized and well-funded reflections upon “transparent” peer review not focusing enough on what author-researchers need (nor on the more radical corners of academia vis-a-vis the reformation of peer review, which has been long ongoing, such as what the Fembot Collective is doing around the open, online journal ada: https://adanewmedia.org/about/, and also what Hybrid Pedagogy is doing around peer review: https://hybridpedagogy.org/in-search-of-the-peer-in-peer-review/), and also because I honestly believe that protocols of peer review may be one of the very few and *rare* cases where we need to allow space for some very idiosyncratic (and not always open) practices, tailored to the needs and mission of variously distinct presses, that are actually aimed at protecting authors from harm and also fostering forms of expert review that will actually improve the quality of any given researcher’s work (and in a collaborative spirit). Nevertheless, as someone who has been active on online academic forums since at least 2004, I have always wanted to believe that, if we really listen to each other, and try to set aside overly-determined prejudices, we can sometimes find common ground. So this is my long-winded way of saying that I am now re-visiting this project with a willingness to hear better what the aims of this project are, while also trying to offer my own thoughts / experience vis-a-vis what the project wants to accomplish (and it is worth noting here that it is admirable that the project has invited public comment & critique at multiple stages: I think this is highly commendable).

To also share my own investments, I am an open-access publisher (punctum books), as well as the editor of an academic journal published by Palgrave (postmedieval), and also a researcher-scholar who runs a press dedicated to the idea that researchers’ wishes and needs are the #1 priority in all of our publication processes (editorial, production-wise, etc.). I take your point that, “The system Mark is proposing is not to normalize or commodify such processes [of peer review], but just to provide a mechanism to indicate what they are transparently.” But it’s not 100% true that “we have no way to demonstrate to our institutions that our books are peer reviewed, and what that might mean.” In the Humanities, “what that might mean” is often just telescopically conveyed by the imprimatur. In other words, publish a book with MIT Press and your reviewing faculty will automatically assume that the highest levels of “best practices” of peer review have been exercised, because of the prestige of the press.

Now, those of us who have worked on all sides of academic publishing (as I have, for many years) know that this is not necessarily true. There may be some books published by MIT Press that did not have the same rigor of review as some other books also published by MIT Press, and there may even be some awful books published by MIT Press (that had expert peer review) and also some brilliant books that did not really gain anything substantial from peer review. We all know this. But we also know that “MIT Press” on a researcher’s c.v. signifies something very specific to most members of most tenure & promotion committees across the world, who simply would not worry one bit about the process of peer review at the press. And that’s a problem, too, right? But if that’s a problem, what kind of a problem is it, and should we want the unevenness of peer review to be “transparent,” or not, and why or why not?

So it’s not necessarily the case that there is a problem — at least, not in the humanities fields I mainly work within — with conveying to one’s Dean or P&T committee that our work is peer-reviewed. Depending on who we publish with, it’s not even a question. But it certainly *does* become a question when we want to publish with more of an outlier, even an open-access outlier press (such as my own press, or even Lever), so I worry, too, that those of us working to advance the cause of transformative publishing processes are bending a little too far into the idea that newer platforms and presses for scholarly communication need to demonstrate, up front, the credibility and legitimacy of their peer review protocols. What’s interesting about the PRT project is that it partners a traditional, well-established, and prestigious press (MIT Press) with an OA start-up (Lever), which at the very least would indicate some very interesting and unique sharing of knowledge and concerns around the question of how, and why, our peer reviewing protocols should be more transparent. For some of us, though, who inhabit what might be described as the more radical corners of, not just academic publishing, but academia itself, we want transparency, too, but we want more transparency: not just of how the process actually works and the labor and sorts of labor involved (for this, I applaud, sincerely, the work of the PRT project, excepting the icons — sorry!), but of all the ways in which it decidedly does not work and is in need of critical reform.
There are issues, for example, in how “credit” for reviewing is essentially worthless in many disciplines, and we need to have more conversations around that. We also need to talk about equity issues. I tried to convey that a bit too snarkily in my previous comment, but in essence, as many of us who have been in the position (as I have been, for many years) of having to solicit expert reviews know all too well, there are serious gender, status/rank, and other imbalances that have adversely affected any one press’s ability to uphold, for example, the AAUP’s guide to the “best practices” of peer review. For example, to be blunt, many of us who solicit reader reports know full well that the more advanced and well-known a scholar is, the more likely it is, with very few exceptions, that they will decline all requests to serve as a reviewer. So when we say “expert” review, what does that mean? Who is the “expert”? Why are men less likely to agree to review, and women more? What does it mean when the best experts in a field are too busy (and arrogantly selfish) to help with reviewing (and what does it also mean that the labor, whoever is doing it, is mainly uncompensated)? In what ways do really conscientious acquisition editors and publishers (like me) intervene in the reviewing process to insist on certain principles being upheld, and how often do we reject reviews that violate these principles? Can we be more transparent about that, and how can we be (the AAUP guide is one commendable step in that direction, but it’s not enough)? These are just a few examples, for me, of how thinking about “transparency” in peer review needs to be as much about pinpointing the inequities and fallacies of “expert” peer review, traditionally understood, as it is about simply revealing more clearly and succinctly what is going on “behind the scenes” of the reviewing process. And let’s be as cautious as possible about what we reveal, and don’t, relative to the needs of authors. For me, their needs (and not the economic bottom line, and not the satisfaction of any one discipline’s expectations for what “counts” as significant research, etc.) are paramount.

I also take your point, Jason, about the comments here mainly addressing what the PRT project *doesn’t* address, versus simply engaging with Mark’s post on its own terms. Fair enough. But this also brings me back to how we define “transparency” vis-a-vis the peer review process at this or that press. I would encourage the architects of the PRT project to at least think a little bit more about what sort(s) of transparency are important, and to whom, and for what reasons. The project *admirably* addresses what has long been a problem: the opacity of the process, which has also created problems of “authority” and “gate-keeping.” But the icons, for example — if they’re really about solving a communication problem with administrators around issues such as tenure and promotion — please consider that they could cause even more problems and even be used *against* authors (I am imagining the Dean who says something like, “crowd review is not expert review,” etc.). So, YES, Jason, to living up to the Tempe Principle, but could this project go further, to really address some of the very broken and inequitable mechanisms at the heart of peer review? That is, not just “this is how we do/did it,” but “this is how we have done it, and that has been bad, and we’re fixing it in these specific ways.”

I also take your point about not wanting “symbol” or “icon” to immediately shade into “emoji” in derisive ways. But perhaps we also need to think more deeply about the ways in which the processes of peer review, again, are deeply broken, and whatever icons we *do* devise need to reflect that in positive, reparative ways.

Good luck.

Eileen – thanks for the comments. I totally agree that there are many things broken about peer review, and maybe this effort aims to solve a relatively minor issue. (And I too have been involved in pushing for more radical efforts in my field, as a founding board member of MediaCommons, an adopter of open review for one of my books, and co-founder of a journal, [in]Transition, that uses totally open review: http://mediacommons.futureofthebook.org/intransition/contribute ) I do think transparency is one step toward addressing these larger issues, and correcting for inequities can be done better with more daylight.

But one comment of yours highlights the problem that we seek to remedy: MIT Press will be assumed to have a rigorous peer review process, while people will assume worse (or the worst) about a newer press without such a reputation or connection to an established institution (punctum or Lever). Based on my experience as an author, editor, and reviewer, those assumptions are both bogus and inconsistently known throughout the academy. I’ve seen minimal or shoddy review at well-known UPs, and great review at start-up OA presses or commercial publishers. One goal of transparency is to judge reputation based more on actual practices (albeit self-reported) rather than historical or institutional assumptions. Isn’t that a laudable goal, and useful for upending assumed hierarchies & norms?

I’m sorry, but I just do not understand how a project that aims at developing CC-like icons to signify peer review can avoid striving for standardization. The CC designations do just that: there is language associated with each CC designation like BY, ND, NC, etc. that is exactly the same in each instance it is used. Somehow this project wants to have it both ways: uniformity of icons while allowing for great diversity in actual peer-review methods. Nobody has yet explained how to square that circle. In answer to the question of what to tell a P&T review committee, it’s very simple: show the committee the reports received from the peer reviewers, ideally with names attached if the reviewers are so willing. Those reports can be accompanied by the standard list of questions the publisher asks the reviewers to answer. What can be more transparent and straightforward than that? This can be done while respecting the autonomy of each publisher to use what it considers to be the best review methods for its purposes. And it is independent of whether the publisher is commercial or non-profit. The proof is in the pudding of the actual reviews produced. I worry that this project is seeking an answer to a problem that does not exist.

I don’t mean to beat a dead horse and come across as a bitter, whiny cry baby (Narrator Ron Howard: he is) but this is so frustrating for me for a few reasons. Please indulge me as I hope the description of my experience will lead to some questions and further discussion.

Once Upon a Time…..I’d been thinking about this topic and felt so strongly about it that I started to develop my own ideas which seem more than a little similar to this project and the discussions in this comment section. This is not an accusation as I am not so egotistical to think I was the lone person in the industry to mull over these issues.

I was fortunate that someone believed in my ideas enough to back me in an attempt to build something. And we did. We invested a lot of time, money, and resources making a go of PRE. I attended every conference under the sun giving talks and promoting it with exhibit booths etc. We had an advisory board made up of The Who’s Who of the industry. I will always be sincerely grateful to those people.

Several of the participants listed in the Appendix of the draft report are members of organizations that we’d discussed partnering with. I personally met with dozens of publishers of all shapes and sizes, including (ironically) MIT Press. It’s like raaaaaiiiin on your wedding day……

Thomson Reuters (now Clarivate) and Aries finally agreed to give us some support. In the end, the only publishers who were willing to give it a shot were JBJS (also part of STRIATUS) and the American Diabetes Association. Ultimately PRE was sold to AAAS, we parted ways, and honestly I’ve no idea what happened after that.

So, what happened? I think David nailed it. The big, prestigious publishers felt no need to participate because their brand was enough. Others did not want to open their “black box” for fear of inadvertently exposing reviewer identities or revealing that their peer review process was not actually as rigorous as they claimed. Most just didn’t want to be first to try something new and took a “wait and see” attitude. Regardless of the fact that this was clearly an inspired, dare I say genius idea, we literally could not give our services away (Ron Howard: He’s not kidding). Cowards, all of them.

Here’s the thing: technically, PRE worked. It provided flexibility that allowed a variety of levels of transparency. We figured out how to address privacy concerns. We knew how to prevent “badge forgery” as much as possible. We had ideas for a suite of services that would grow out of this first tiny step on the path to verifying peer review processes, studying them, finding standards where we could, and developing a training program. Did we want to monetize these services? (they did) Of course! But we tried to make pricing reasonable and not a barrier to participation.

So here are my questions.

PRE addressed an industry need. Price was not an issue. It would’ve worked. Why didn’t it?

Did PRE fail because I was a terrible spokesman? Don’t answer that….

Was it bad timing? (The author consoles himself by thinking he was just ahead of his time)

Why will the efforts reflected in this draft report succeed now? What’s changed?

When will I be recognized as the ground breaking genius that I am?! (See? There he goes)

Do I have to die in order to be appreciated? Jason Roberts has an award named after him and he’s still alive!

If you’ve made it this far I hope you realize this was primarily tongue in cheek and I’m not a lunatic (he is).

I support any effort that brings peer review into the light in an effort to improve the system for all involved. If I can help in any way, I’d be happy to.

Adam – first off, let me assure you that, as someone in the initial brainstorming session that led to this project, there was no knowledge of PRE as a forerunner. We were coming from the humanities & book side of the publishing/academic world, and it seems like PRE was much more within the science journal world – that’s a pretty big gulf. I’m sure there were people in the room in our “summit” who knew about PRE, but that was long after the draft of the idea was well-developed.

Having just learned about PRE from your comments, it seems like the goals and approach were quite different. Boiling down peer review processes into one numerical PREscore appears to embrace a normative approach, valuing certain modes & practices of peer review over others. The PRT approach is explicitly non-normative: there is no value judgment implied by indicating whether a book was reviewed by 1, 2 or 3 reviewers, via blind vs. open identity, or as a proposal vs. manuscript. Any of these approaches could be worthy & justified depending on the goals of the publication and review process. PRT just aims to make that information available to readers, who are free to judge relative value or norms.

I hope that distinction makes sense and assuages some of your frustrations.

Not really. 😉 We solicited humanities as well as science. It is true we did not focus on books but we’d discussed eventually working on that once we established total domination of the journal world.

PREscore was the initial iteration which evolved into just PRE and our first offering of PREval. I always thought with some work the score could be valuable and added at a future date, but what we found as we worked on it was a more obvious need and solution.

NEED: Concerns about peer review being conducted, if at all & a desire to be more transparent about the process.
SOLUTION: A tool that would “make that information available to readers, who are free to judge relative value or norms.”

See

https://hub.wiley.com/community/exchanges/discover/blog/2014/08/20/advancing-peer-review-a-qa-with-adam-etkin-of-pre

and

https://peerreviewwatch.wordpress.com/2014/04/09/after-the-prwdebate-interview-with-adam-etkin/

Re-reading my comments, a correction and point of clarification.

1. Joe was the one who “nailed it” re: big publishers

2. STRIATUS was the organization that initially backed PRE

I’m late to the party here, but at IOP Publishing we have just started labelling our published articles with the type of review that was performed (double or single blind), the number of revisions that were done, and whether the article went through a plagiarism check (http://ioppublishing.org/peer-review-information-now-available-online-articles/). We have definitions on our site that explain what we mean by the terms we use. To date we’ve had nothing but good feedback on this.

Hi Kim. I sincerely believe efforts such as you describe are a positive step in the right direction. However, here’s the problem. Let’s say information such as you describe comes to be expected by our community of authors, editors, reviewers, librarians, etc., kind of like many of the current indexing services and metrics. Now throw predatory journals and other shady players into the mix. Why should I believe a journal is telling the truth? There needs to be an objective way to verify that the journal is in fact conducting the peer review process it claims.

I totally agree, Adam! It’s difficult to provide that verification without jeopardizing reviewer identities, though, and really it needs third-party verification. PRE was a great idea (and one from which we took a lot of inspiration for this project), but it was ultimately too expensive for us, and if it’s not picked up as an industry standard, who is going to believe PRE is telling the truth either?! I don’t have the solution, but I do think the fact that we’re having the conversation is the first step in moving towards a more transparent and trustworthy system.
