Much of the Twittersphere (I can still say this, right?) is alight with discussions on preprints, and a drive to consider open preprint peer review. The trigger was the recent announcement by the Bill and Melinda Gates Foundation of a shift in their OA policies for Gates grantees: they will no longer support article processing charges (APCs) and will insist on deposit of the grantee’s work in a preprint server, while continuing to require immediate Green OA should the authors of those preprints choose to publish them in a journal.

In this post I attempt to sift through the goals and incentives in play, leading to thoughts about how we may iteratively move the goalposts on research integrity.

Some of the discourse on preprints has centered on the idea that journals are no longer necessary as part of the academic ecosystem, and that focusing on preprints is more valuable as an inclusive exercise for all academics, allowing them to share early versions of their articles while refining them over time.

It is worth reading some thoughtful recent blog posts to come up to speed on this topic, such as Richard Sever’s eloquent recent article in the LSE Impact Blog, entitled “Is the Bill and Melinda Gates Foundation’s New OA Policy the Start of a Shift Towards Preprints?”, and recent Scholarly Kitchen posts by Lisa Hinchliffe (“Gates Open Access Policy Increases Compliance Burden and Eliminates Financial Support”) and Ann Michael and Dan Pollack (“Gates Policy Refresh: What Would Success Look Like?”).


There is no question that preprint servers should play a significant role in academic life in some fields. Indeed, in mathematics, computer science, and physics the arXiv has been a central part of life for researchers for some time, with versions of record being published post-preprint in a journal. From my perspective, I look at a preprint as a vital part of a spectrum and believe that there can be a happy and productive coexistence of the two forms of representation.

Where I struggle is with the argument that journals are no longer necessary. If the goal is to dismantle corporate publishing and its significant profits in favor of an equitable and inclusive approach to preprints, I see the logic, though it is worth pointing out that not all publishers generate the kind of profits that pose such a problem. Indeed, while these players certainly exist, many smaller operations — such as small to medium sized societies or university presses — operate very differently. However, I am not convinced by arguments that dismantling journals will alleviate current issues with research integrity and peer review.

Of course, at this juncture, I need to be transparent about my own role: I am a publisher at the American Mathematical Society (AMS), a scholarly society with some 30,000 members ranging from students to full professors across the world. I exist in the weird and wonderful world of mathematics, a field made up of many small sub-fields. Generally speaking, mathematics is not well funded: many eminent mathematicians are salaried, teach in four-year colleges, and do not receive direct funding from federal agencies such as the National Science Foundation (NSF). Those that are funded also do not have teams of postdocs; rather, they rely on a blackboard, books, and a computer. The grant monies involved are tiny by comparison to, say, research in the biomedical sciences. Indeed, this is why our response to going open access in recent years was to shy away from charging APCs, favoring zero-embargo Green OA and Diamond OA for all our journals to ensure inclusivity and equity for both authorship and access. We still sell subscriptions, modestly priced, and these are well received by our library customers, as long as we are providing a path to open. We also see a culture within mathematics of authors depositing their final manuscripts in arXiv, which has become an accepted practice. Our approach allows us to fund the many ways in which we help support the mathematical community, be it through travel grants, summer schools, subsidized conference attendance, prizes – the list goes on. To me, in this context, journals are as natural a part of a supportive research ecosystem as preprint servers, and I believe the two can happily coexist.

Where I think we can do better is in addressing research integrity. Here I do not think the responsibility lies solely with the journal, nor with the preprint server. Yes, as we have seen so clearly with so many fraudulent journal practices leading to mass article retractions, there are bad actors out there in the publishing world. Yes, I think an APC-driven article economy contributes mightily to the chance of more fraudulent behavior in the rush to publish more stuff – not just quality stuff. In the end, though, we have to change the mindset of authors, funders, and institutions on how a researcher’s success may be measured. We are not going to do this overnight, but perhaps we can slowly shift the current system by supporting efforts to deemphasize publication records, citation impact, and the impact factor of the journal in which a researcher publishes. Instead, we can focus on peer review, regardless of whether this means a traditional approach to double anonymous peer review or open peer review – the form of peer review is secondary. Where I think we can make progress is in shifting career success metrics; for example, tenure committees can consider a researcher’s peer review record, that is, how engaged a researcher is with peer review, remembering not to focus just on reviews for high impact journals as part of this measure. This reinforces the culture of rigor that all research settings should seek to achieve.

I would argue that private funders such as the Gates Foundation or the Howard Hughes Medical Institute (HHMI) could provide material support through grants and policies for quality peer review, baking peer review into selection of grantees. Such an approach will require careful structures and mechanisms for reviewer selection, and measures of success, or we may run the risk of creating further inequities. Mind you, in many fields it is just hard to find good reviewers prepared to put in the effort required for a considered, thoughtful review. Societies, such as my own, could also consider material ways to support peer review more actively – a philosophical and practical approach to raising the profile of peer review at an early stage in the life of a researcher.

In fact, what I am getting at here is the idea that peer review is as important a part of what a researcher does as producing results. All of this should really be considered in the context of the field. In mathematics, an article is as much about writing an elegant mathematical proof as it is about a result. In a sense, the intellectual property here lies as much in how the idea is expressed on the page as it does in the result. If we change the mindset of researchers to focus on peer review as being just as important as producing the next big result, we have a chance of weaving research integrity into the fabric of what it means to do research. Perhaps tenure and promotion committees would sit up and take notice of the importance of peer review alongside publication in career assessment.

Your reaction to my thoughts may be “how naive”, or “lovely ideals, but how?”, or “why be so gently iterative?” My response would be that we have a duty to our fields’ cultures to develop appropriate tools that stress research integrity as fundamental to career progress, and to foster the understanding that cheating the system does not help you as you navigate your academic life.

Robert Harington


Robert Harington is Chief Publishing Officer at the American Mathematical Society (AMS). Robert has overall responsibility for publishing at the AMS, including books, journals, and electronic products.

Discussion


As usual, Robert, thoughtful, insightful, and calm as we have come to expect of you. The big picture of the various types of OA, the existential value of journals, bad actors, and heavyweight funders is a destabilized one reminiscent of Robert Maxwell’s entry into scholarly journal publishing in the immediate post-World War II era. On the other hand, at least in the social sciences and humanities (and mathematics), it may well be that countless small society and other scholar-run journals, countless tenure and promotion committees, and the many small scholarly publishing efforts in all languages recognize, at least tacitly, the research integrity of which you speak. Within their disciplinary worlds, many scholars are relatively oblivious to the big picture debates on scholarly publishing. Often, they and their colleagues recognize smaller journals as serving their community. They know of the editor, they know which journals their colleagues value, they understand the value of peer review even though it may be off the mark, and they appreciate the limited opportunity for dialogue with the journal. And then… amidst career concerns and a daily deluge of invitational advertising by “global publishers,” and facing (fair-minded or not) substantial revisions to a submission, they take the plunge into a world defined by performance metrics that is as addictive as cell phones are to teenagers.

Perhaps, at least as an antidote, the scholarly journal community might want to join with scholarly societies to encourage a focus on research integrity by developing a model framework that evaluation (including tenure and promotion) committees could use in their deliberations.

Excellent article, with sound arguments. I fully agree with two basic principles:
– peer review is the pillar of quality scientific publication;
– the APC model is intrinsically flawed and cannot be repaired.

Thank you, Robert, for this post. In Sweden and around the EU, people have been working on changing the merit system.
https://www.su.se/english/news/new-policy-for-open-science-at-stockholm-university-1.629932
They have formed groups with researchers, funders, university libraries and provosts to understand the problem fundamentally and find/test alternative solutions. At the end of the day, all we have is our integrity; working together in scholarly comms will ensure that the changes made will benefit both ourselves and the wider public. Looking forward to a better future in scholarly comms. 🙂

Thank you for your thoughtful and well-meaning post, Robert. I agree with the proposition that incentives around peer review need to be developed, and that such a development may provide us a way to tackle research integrity issues. In the same breath, I also think evolving incentives for (as well as monetizing) peer review might unintentionally help breed “peer review mills” (akin to paper mills) that churn out peer reviews for hiring, promotion, funding, etc.

How much of a risk would you consider this possibility to be?
