At the STM Association Annual Meeting in “virtual Frankfurt” last week, much of the focus was on how scholarly publishers are responding to the COVID crisis. Publishing executives reported how they have accelerated their editorial and peer review processes for COVID submissions, rightly taking pride in the contributions they have made to fighting the pandemic. They also emphasized again and again that they want to be more trusted. This is a formidable challenge in light of some recent failures. To achieve their objectives, publishers need to become more comfortable talking about their mistakes to prove convincingly that they are learning from them.

Image: Francesco I d’Este Provides for His People During the Great Plague of 1630, The Metropolitan Museum of Art

Let’s be clear that at the highest level scientific publishers — commercial and not-for-profit alike — have much to be proud of this year. Journals have implemented dramatic changes to their editorial and peer review processes to speed up time to publication for COVID-related materials. Annette Flanagin shared that JAMA is using a differential peer review process, with greater scrutiny for articles that, if published, would be “likely to influence clinical practice or public health.” And, with the volume of submissions up substantially, publishers have collectively scaled up their throughput, increasing the number of articles published at Springer Nature by 11%, according to CEO Frank Vrancken Peeters. All this is impressive, especially in light of the disruptions that publishers (like so many enterprises) have faced themselves during the pandemic.

To speed up and scale up journal publishing, not to mention other parts of their businesses, scholarly publishers have clearly made investments, at least selectively. Elsevier CEO Kumsal Bayazit reported adding resources to editorial and production functions, including artificial intelligence systems developed to speed up the processing of COVID papers. Accelerated review that maintains a high degree of rigor cannot be achieved without significant expense. A publisher with a substantial share of its revenues sourced from article processing charges (APCs) might find this a profitable opportunity, but others might be unable to offset the added costs. Within the context of other businesses and sectors making major cuts, these investments and the organizations behind them deserve to be celebrated.

At the same time, I would encourage publishers to balance their celebrations with self-reflection. Scholarly publishers wish to see themselves as stewards of the scholarly record and of the transition to open science. To do so in a way that is compelling to all stakeholders, they must continuously increase the quality and rigor of their work, probe their processes for weaknesses, and make their work ever more resilient against potential points of failure. 

For all the contributions that publishers have made this year, the record is not unblemished. According to Retraction Watch, more than 35 COVID papers have been retracted or withdrawn, including more than 20 scholarly journal articles and others posted as preprints. To be sure, this is a very small percentage of the many thousands of COVID papers released this year. Still, it is a serious matter. Among the withdrawn papers were several published in top-tier journals, including the infamous cases of research published in The Lancet and the New England Journal of Medicine that had to be withdrawn because best practices around data availability and reuse were neither followed nor enforced. These withdrawn articles were among those that received intense public and media interest. They have contributed to the swirl of public confusion and the politicization of science that have made a public health crisis worse than it might have been.

While editors of an individual journal may learn lessons the hard way, it seems to be less common for publishers to fix these problems for their entire portfolio, let alone for them to be addressed on a sector-wide basis. Certainly, none of the executives speaking last week “at Frankfurt” addressed the matter. One CEO later agreed that there is “lots more we can and should do to reinvent,” but when the question was raised during the session it was difficult to avoid the perception that others were more of a mind to minimize the problems (“a few cases where we have dropped the ball”) and change the subject.

Yes, this is an unprecedented moment, but it is also a good one to look at the systemic dynamics. Indeed, in the wake of any single retraction, publishing houses face near-term incentives that are similar to those of an airline after a disaster: rather than looking for the root causes of the problem and investing to fix them across a portfolio, it is in their near-term interest to address the issue narrowly, perhaps sometimes assign blame elsewhere, and try to move forward as quickly as possible. But in the long run, avoiding scrutiny is good neither for airline safety nor for the industry as a whole. That is why the federal government created the National Transportation Safety Board (NTSB), a third party that investigates airline accidents, assigning responsibility and recommending future corrective actions fearlessly. Perhaps the scholarly publishing sector needs an NTSB-like partner to conduct regular “post-mortems,” so that the sector as a whole can learn lessons from each incident, address shortcomings proactively, avoid repeating the same mistakes, and establish confidence that editorial practices and trustworthiness are continuously improving.

Today, the scholarly publishing sector looks to reestablish itself as a steward of the scholarly record and a trusted party to lead the transition to open science, and we need it in this type of role more than ever. Being entrusted with this role requires that publishers identify problems honestly and with humility, since trust is earned, or squandered, at a sector-wide level. The sector does not need triumphalism from leaders that enables their organizations to downplay festering problems. And, it does not need its boosters to selectively amplify concerns with preprints — when publishers should focus on their own shortcomings. The sector needs not only to ask for trust but also to make sure that it is continuously earning it every day.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Discussion

10 Thoughts on "Are Publishers Learning from Their Mistakes?"

“And, it does not need its boosters to selectively amplify concerns with preprints — when publishers should focus on their own shortcomings.”

Since every link in this sentence points to my paid e-newsletter (https://thegeyser.substack.com/), allow me to take this one a little pointedly. The trends in publishing and preprints have strong parallels — an embrace of quantity over quality, expanded editorial goalposts, and a diminution of standards for the sake of embracing an increasingly imposed business model tilted toward producers, not consumers. I believe redesigning preprint servers so they are safe at any speed and serve a useful purpose is critical to improving the overall state of the sector, as you urge.

We can fix a lot more if we don’t draw artificial lines around trust and stewardship. Preprints are part of the landscape, and they repeatedly fail to engender trust, reflect careful stewardship, or live up to our ideals.

Have publishers learned lessons? Let’s hope so. One is that not everything should be public. Another is that the information landscape is waiting to exploit sloppiness, naïveté, and mistakes, in ways that can cause science to be warped or misrepresented.

So let’s talk about it all, and not draw artificial lines around certain ideological pet projects. I think the sentence I started this comment with is an obvious attempt to shield preprints from legitimate criticisms, when you’re also saying the entire sector needs to step up its game. So, which is it? Are they on the table? Or are they in some magical protected zone, immune to critique and improvement?

Preprints are absolutely on the table. I think there is legitimate critique with preprints, and I’ve certainly addressed some of it on Twitter and elsewhere during the COVID crisis. I would be thrilled for my “NTSB-like” proposal to be extended to preprints and other research artifacts. Perhaps someone — maybe even you or I or even the two of us together — should co-author something about this, about how to make the entire system more resilient against malfeasance and other risks.

Preprint servers have three broad problems — their design, their financing, and their mission. I’ve proposed solutions for improving and refining all three so that preprints can remain useful and modern while minimizing downsides and vulnerabilities. Until such deficits are practically addressed, preprint servers will remain open to manipulation and exploitation. The most recent incident — a Covid-19 misinformation preprint funded by a group associated with Steve Bannon and amplified by Fox News and others — shows how serious such abuses have become.

One interesting finding is that preprints are exploited by fringe groups because they are fringe literature. Papers published by established publishers are covered by the mainstream media, and this combination seems to insulate them from exploitation. Mainstream journalists, who attempted to cover Covid-19 preprints and were repeatedly burned, are now covering preprints even less than before, leaving them open to exploitation in absentia. This is another mark in the long-term demerits column for preprints, I think. They had their chance in the sun, and they blew it. We need to be realistic about the risk:benefit ratio. I’ve analyzed tens of thousands of preprints, and they are hardly ever used directly, comments are few and far between, and the main utility seems to be something a smaller hub-and-spoke model of trusted peers could accomplish. But with open servers and unmanaged archives, preprints are being exploited to launch misinformation campaigns around Covid-19, vaccinations, white supremacy, and general political issues (voting, etc.).

Let’s also remind ourselves that preprint servers may not even need to exist at all — that’s a possibility we should also leave on the table. Not every idea is a good idea, and an option is always to stop what you’re doing if it’s not working out.

You know how to reach me if you want to discuss this seriously. In the meantime, I’m glad you agree that this area needs to be included in how to increase trust and reliability. It is currently a very vulnerable point in a system that we should make stronger and more resistant to exploitation by commercial, ideological, or political interests. When an industry is being warned by the FBI that its main outputs are being exploited by foreign actors, that should be another Bannon-level wakeup call.

It’s interesting that you are writing long paragraphs going after preprint services again. Perhaps you should respond to the principal claims of my piece, which are after all about the flaws of traditional publication processes.

I am glad in your last paragraph that you bring up the FBI/CISA warning. When we discussed this on Twitter, you might remember that I argued that this wakeup call might even apply to prestigious journals, a point you subsequently acknowledged, though to my memory without any attribution, in your Geyser piece.

In any event, if you would like to have a “serious” conversation, which would necessarily include the risks and the issues that present themselves across formats and processes and not just selectively amplify concerns with preprints, don’t hesitate to let me know.

You are attributing a motive to my writings on preprints, yet again. You did this a couple of weeks ago as well, as some people pointed out to me. So I think I’m sensing a pattern.

As for preprint servers, I don’t examine them to “go after” them. My first goal was and is to understand what is going on with them — they’re new, so what are the risks, benefits, and potential improvements? It’s purely about understanding them. As part of this, I have found things to critique, and I write these out so that managers of preprint initiatives can start to realize there are problems to solve with the current approach. There’s much more to examine and worry about than I ever expected when I first started analyzing preprints, and the problems seem to be getting worse as time goes by and no remediation occurs. In this spirit, I’ve offered a number of concrete design, funding, and scope alternatives for preprint server operators to consider. My newsletter is read by some of the leaders in preprints so they can learn from the downsides and potential fixes I identify. You interpret these critiques as attacks. Others read them as well-researched and timely warnings, fair critiques, and, I hope, indications of what can be improved, eliminated, or reconsidered around these new and untested initiatives.

If you felt slighted by not being mentioned in my post about the FBI/CISA because we had an exchange on Twitter, please realize you were not the sole source of input, exchange, or information on this for me, not by a long shot. For the sake of comparison, I did cite my outreach to the FBI offices in Boston and DC, because they were the only source for some of the information I obtained from them.

As you know as a reader of “The Geyser,” and previously of “The Scholarly Kitchen,” I’ve exhorted publishers of all types to return to the norms of truth-seeking organizations and beware the pitfalls of a dynamic, untrustworthy, and even antagonistic information ecosystem based on Silicon Valley ideas and theories of information. So, as for this post, I’d just like to say, in the immortal words of John McClane, “Welcome to the party, pal.”

I only wrote today’s comments focused on preprints to unpack what appeared to be an arch comment in your post dismissing my fair, well-researched, and sincere concerns, via a trail of links. I thought that merited further elaboration for the sake of the readers of the Kitchen. I hope what I wrote will help them understand that I’m not selective but diligent, that the concerns are real, and that any amplification is probably justified given the gravity of the situation when it comes to misinformation and political/commercial exploitation of scientific information. The stakes around accurate information are feeling pretty high these days. On that, I think we can agree.

You mentioned “festering” problems, which is defined by persistence. Has nothing changed since Retraction Watch started? Here I was feeling pretty good that these withdrawals and retractions (Retraction Watch chooses not to distinguish between the two) happened relatively quickly given the complex nature of a “novel” virus. I guess even better peer review would have prevented some of these (maybe not the letter that is included in the list of articles), but without repeating the science, can peer review really fix this?

Nice piece, and I love the idea about an NTSB for publishing. To Kent, I enjoy reading your research on preprints. I find it very informative, and incredibly useful considering the rapid growth in preprints. Quite frankly, the number of these I see the media highlight (especially related to the pandemic) is very concerning. Keep it up please.

Let’s talk about that NTSB — how would it be funded and who would run it? If it’s an industry group funded by publishers, would it be hindered from taking punitive action against those same publishers and end up largely toothless? If it’s a government run entity, how would one avoid it being co-opted for political agendas (would you trust the current US administration to police the literature on COVID or Climate Change for example)?

An important discussion, no doubt. Time is needed to complete this assessment, and perhaps the earliest that one can say something definitive would be once a vaccine has been used by one or more populations around the globe. Until then, all conclusions drawn are extremely premature, in my opinion. Some of my views and experiences on this here:

Silently withdrawn or retracted preprints related to Covid-19 are a scholarly threat and a potential public health risk: theoretical arguments and suggested recommendations. Online Information Review
https://www.emerald.com/insight/content/doi/10.1108/OIR-08-2020-0371/full/html

Publishing volumes in major databases related to Covid-19. Scientometrics
https://link.springer.com/article/10.1007/s11192-020-03675-3

“Questionable” peer review in the publishing pandemic during the time of Covid-19: Implications for policy makers and stakeholders. Croatian Medical Journal 61(3): 300-301.
http://www.cmj.hr/2020/61/3/32643350.htm

An alert to COVID-19 literature in predatory publishing venues. The Journal of Academic Librarianship 46(5): 102187.
https://www.sciencedirect.com/science/article/pii/S0099133320300902
