A week ago, Richard Poynder, a well-known and widely respected observer of the scholarly communication ecosystem whose blog Open and Shut? is generally considered a must-read source on the topic, published an extensive commentary on the current state and future prospects of both open access (OA) and the open access movement. Titled “Open Access: Could Defeat Be Snatched from the Jaws of Victory?,” it is an important contribution to the ongoing discussion of the future of scholarly communication.

Before I proceed to summarize and respond to some of the points he makes in this wide-ranging and frankly magisterial document, I should point out that the distinction I’ve made above — that between OA itself and the OA movement — is important. Both the constellation of OA publishing models and the global social movement that seeks to promote them are complex and multifaceted, and the strengths and weaknesses of one are not necessarily commensurable with those of the other. The importance of this distinction frequently becomes obvious both in Poynder’s paper and, I hope, in my response here.

Two Fundamental Points

As I understand it, Poynder is making two fundamental points in his analysis, each of which is summed up conveniently in a sentence that can be quoted directly:


We have re-discovered the truth that there is no such thing as a free lunch. Providing free content and services inevitably requires some form of revenue from somewhere.

Interestingly, this is not actually a controversial statement; when the issue of inevitable costs is raised with OA advocates, the response is usually irritable impatience: “Of course OA isn’t free; we get that.” But according to Poynder, in the early years of the movement, “OA advocates gave little or no thought to how the free online content and services they were demanding would be funded.” He provides a number of examples of early advocacy rhetoric that exposes this lack of consideration for the problem of cost allocation, or that suggest a fatal naïveté as to the real costs of providing publishing services — a naïveté that continues to be demonstrated whenever someone characterizes the work of publishers as “taking content for free and selling it back to us at an enormous profit.” Serious open publishing initiatives from arXiv to PLOS to Knowledge Unlatched have clearly shown the fundamental unseriousness of such characterizations; systems always look simpler and cheaper from the outside than they do from the inside, and scholarly publishing is no exception to that rule.

Poynder’s second fundamental point is the one on which he spends the bulk of his time, and it is also by far the more controversial:

We have learned that openness is by no means an unmitigated good.

This is the statement that will likely be met with the most consternation by many in the OA and open scholarship movements. But Poynder supports his assertion extensively, evoking, among other concerns, the issue of what has come to be called “surveillance capitalism” — a phenomenon that thrives (and can only thrive) in an environment of free and open information sharing. As Poynder points out, “the Internet’s foundational ethos of free sharing led web companies to devise business models that are now seen as both deceptive and predatory.”

To be clear, he’s not talking about predatory/deceptive journals here; he’s talking about Facebook and Google, which operate on models under which “the user has become the product.” Of course, it’s important to point out that the “user as product” phenomenon has existed for as long as there have been newspapers and magazines — not to mention free TV broadcasting — which relied on a business model that lured consumers with artificially cheap access to content, thus corralling their eyes on behalf of advertisers whose payments underwrote the bulk of the publishing or broadcasting costs. The major difference between “surveillance capitalism” as it exists now and as it existed in the print-and-broadcast era is one of both scale and effectiveness: in the networked online era, harvesting useful data about information consumers is much, much easier and more efficient than it ever was before — thanks entirely to the radical openness of the Internet.

Unintended Consequences and Unexpected Outcomes

In broader terms, though, when Poynder says that “openness is by no means an unmitigated good,” he is referring both to the inevitable emergence of unintended consequences, some of which will necessarily be negative, and the inevitable failure of some intended consequences to be realized. Among the outcomes that he believes OA advocates have failed to anticipate are these:

  1. The fact that an open and online world creates new tasks and costs in addition to obviating old tasks and costs.
  2. The ability of legacy publishers to adapt to the new environment in ways that would “allow them to maintain their power and… to increase it.”
  3. The reluctance of researchers to embrace Green OA. (In this respect, Poynder argues in particular that “physicists were not typical,” in that they flocked to embrace online distribution of preprints, which itself was the natural extension of long-established print-based practices in the discipline. Though even here, preprint distribution falls well short of most definitions of either Gold or Green OA, and of course does not involve the institutional repositories that have generally been the movement’s preferred locus of Green OA deposit.)
  4. The continued attachment of both researchers and their host institutions to off-the-shelf evaluation tools such as the Impact Factor (IF). (In this regard, Poynder shares a notable statistic: “over 90% of Berkeley faculty still consider high Impact Factor an extremely important criterion when determining where to publish.” And here one has to wonder how these faculty feel about the University of California’s recent decision to cancel access to some of the most high-IF titles in their disciplines. Whether this attachment to the IF is wise is an important, but obviously very different question.)
  5. The potential for damage caused by unrefereed preprints, which can (for example) be posted by drug companies in the knowledge that journalists and others will likely cite them as having been “published” — the thoroughly debunked but widely cited cellphones-and-brain-cancer study (still available in bioRxiv*) being just one of the more egregious examples of this danger. (It’s important to note here that preprints also pose the potential for significant public and scholarly benefit.)
  6. The possibility that “geowalls” will take the place of “paywalls.” (This would be a natural, though unintended, consequence of the popular argument that “taxpayers deserve access to scholarship arising from the research they have funded.”)
  7. The further danger that ideological, social, and economic conflicts between nations could lead to the splintering of the Internet itself, with countries walling their citizens off from the larger world (and walling certain elements of the world out, as well). We see what could be the beginning of a global trend with China’s infamous “Great Firewall,” which some have characterized as the “world’s biggest non-tariff trade barrier.” It would be easy to assume that this can’t happen; that the Internet will (because it inevitably must) remain both open and global. And yet it already is substantially less than globally free and open; many millions of the world’s citizens are actively restricted from accessing it freely — and more than 50% of people (mostly in the Global South) still have no access at all. The argument from inevitability does not have a particularly distinguished career of success in human history.

A Tipping Point? Maybe, and Maybe Not

Despite all of the above issues, however, Poynder argues that two more recent developments have convinced OA advocates “that a tipping point has been reached and the war won.” The first of these is librarians’ growing enthusiasm for transformative agreements (as witnessed, for example, in Elsevier’s abortive negotiations with the University of California system and its successful negotiation with Carnegie Mellon University), and the second is the slowly growing trend of governments and funders creating “ever more coercive mandates to compel researchers to embrace OA.”

Poynder, however, suspects that the “tipping point” is illusory, and he sees a number of reasons why the battle for OA could end up being lost rather than won. He outlines several serious challenges to the ultimate success of universal OA. He does so not for the purpose of discouraging its advocates, but rather to help them “anticipate potential problems and try to mitigate them.” Some of the potential problems he outlines are organic (originating from within OA itself, in its various manifestations), and some are external (originating in other social and geopolitical dynamics). They include:

  1. Pushback/Counterrevolution. While many advocates see open access as a moral imperative, government funders tend to see it in terms of potential financial benefit. But if OA fails to yield the anticipated economic benefits, government support could erode quickly. (It also bears pointing out that the moral-imperative argument for OA is hardly universally accepted, even within the scholarly community.) And, of course, OA journals can be flipped back from open to closed in response to pushback from authors or funders — this has happened already in a notable number of cases. Privacy concerns, worries about the mismanagement or nefarious application of free information (by poachers and traders in human remains, for example), the potential for open access and open data to contribute to dangerously uncontrolled development of artificial intelligence, and concerns on the part of researchers about others exploiting data they have labored to generate may also create pushback. Concerns are also already arising about the redirection of research money to support the free provision of access to content.
  2. Populism/Nationalism. In recent years we have seen growing attacks on academic freedom from governments throughout the world — including in the US, where the Trump administration has taken an aggressive approach to discouraging research into topics such as climate change, while actively seeking to expunge research data on such topics from the public record. At the same time, we are also seeing an increasing tendency of democratic processes to yield illiberal electoral outcomes. In this context, there is growing concern that making access to research results more broadly available does not necessarily increase the public’s understanding of those results, nor does it necessarily inoculate the public from the predations of scientific hucksters.
  3. Economic Protectionism. Trends toward economic isolationism and rivalries between powers both great (China vs. the US) and somewhat-less-great (North and South Korea; Saudi Arabia and Yemen; Iran and Iraq) are likely to lead not to more global openness and collaboration, but rather in the opposite direction. OA is, by its fundamental nature, international, but blockades on currency exchange, for example, make it impossible for authors in some countries to pay APCs to foreign journals; similarly, scholarly exchange between scientists in some countries is restricted by government policy (this was true in the US under President Obama, and is even more the case now, under President Trump). Furthermore, there is no question that the free sharing of information, both for reading and for reuse, will inevitably have uneven effects in the world, offering relatively less benefit to the most scientifically and technologically advanced countries and relatively more benefit to less-developed countries. While many of us might argue that this sounds more like a feature than a bug, it may seem just the opposite to those in positions of power in an advanced country whose job it is to help that country stay dominant. Poynder points out the potential implications of this dynamic for (to take just one example) the US National Institutes of Health—the “largest public funder of biomedical research in the world.” At the same time, of course, the massive and systematic cyber theft of intellectual property is a growing international problem, and geopolitical competition and conflict — notably between China and the US — make resolving that problem harder rather than easier.
  4. Naïveté. Poynder argues that a seemingly willful ignorance of the potential for unintended consequences has been a hallmark of the OA movement since the beginning; indeed, even today (as many of those consequences are becoming painfully apparent) attempts at discussing them are regularly dismissed in the community as “fear-mongering.” Such consequences include:
    • the APC model leading to the problem of predatory journal publishing;
    • OA mandates leading to a backlash among researchers;
    • insistence on Creative Commons licensing creating not only resistance among authors but also unintended IP consequences of its own;
    • the potential for OA to lead to decreased funding for libraries.

In one telling example, Poynder reports that this very naïveté about open science and scholarship practices in general has led the FBI to travel to research universities to brief their administrators on best practices regarding information security.

Poynder devotes many pages in his report to an extensive analysis of current global ideological divides and great-power struggles (particularly between China and the United States), most of which is interesting but much of which seems less centrally related to issues of OA and scholarly communication; I let it pass without comment here not because it’s uninteresting or irrelevant, but mainly because I don’t want this response to be as long as the document to which I’m responding. For our purposes, suffice it to say that Poynder sees significant negative implications of these ideological and geopolitical struggles for the future of openness in science, scholarship, and scholarly communication, and I believe he makes a compelling argument for that position. Two of his observations in particular seem especially worth quoting: “The future of scholarly publishing will surely depend to a great extent on what China does — not least because it is now the second largest publisher of research papers in the world and expected soon to overtake the US as the world’s top economy… It seems logical to ask whether China’s interest in OA demonstrates a commitment to openness or simply a desire to have access to research produced in other countries.”


Richard Poynder’s full 84-page document is well worth the time and energy required to read and digest it. Not only does he offer sharp and often trenchant analysis of the state of open access itself, but he also provides wide-ranging geopolitical and economic context for his analysis of the current state and possible future(s) of OA, and of the movement that is dedicated to promoting it.


*Disclosure: I serve on the advisory board of bioRxiv.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.


33 Thoughts on "The Tyranny of Unintended Consequences: Richard Poynder on Open Access and the Open Access Movement"

I take up here the concerns expressed about preprints in the com box here:
Suffice it to say that I haven’t heard convincing arguments that preprints are an inappropriate venue for OA publishing of research that bears on biomedicine. Certainly we should be unhappy about the potential for misuse of information by the public, but doesn’t that become an argument for censoring the publication of popular books or newspaper or magazine articles covering the same topics as preprints, on the basis that the public will be misled? The potential for the public to be duped by bad science presented in those venues is at least as great as it is with preprints, and surely far more laypeople read those formats than read preprints. Suitable disclaimers attached to preprints could obviate a lot of problems, too. All that said, I’m open-minded on these issues and interested in counterarguments.
The problems people point out with the public being misled, however, diminish significantly with other types of research. Take physics.

Let’s not confuse a brief mention of the downsides of preprint servers with a call for them to be censored.

I’m merely pointing out the logical entailments of a certain viewpoint so I stand by my comment. I’m of course not saying that advocates of this viewpoint agree with that entailment, just that they should contend with it.

And I guess I’m disagreeing with you that discussing a downside of preprint servers logically entails calling for them to be censored. One may recognize and observe that a practice or service offers costs as well as benefits without implying that the service should be limited or curtailed.

I’ve used up enough bandwidth here, so a final note; I’m more interested in hearing the views of others. We’re probably more in agreement than might appear. Preprints have a downside (but so does any form of scientific communication). I’ve taken the arguments of certain preprint critics to pretty clearly discourage their use or even existence in certain subject domains, though it remains unclear how far they’d be willing to go in that direction. Mr. Anderson calls for “higher standards.” What exactly does that entail, practically speaking? Shutting down the preprint servers? Professional sanctions on scientists who use them? Which scientists? Just biomed scientists, or does this apply to engineering or physics domains as well? The Ingelfinger rule widely observed among editors?
I’m actually quite open to changing my views on use of preprints in the biomedical space, but I need to see arguments that go beyond reference to potential misuse by journalists. (A good deal of journalism is irresponsible; it’s been that way for centuries.)
Again, any preprint that has anything to say that bears on human well-being should carry appropriate boilerplate that is very easy to understand, to obviate dangers. It would be great if every preprint were stamped with easy-to-read language along the lines of: “this research has not been subjected to peer review,” followed by an easy-to-understand explanation of what peer review is. It becomes a teaching moment for the public about the meaning of peer review. Sadly, a lot of journalists don’t understand it, either.
My view of preprints is that they serve the same purpose as epistolary/letter formats did in an earlier day, namely a place for disclosure of new ideas in piecemeal fashion. Peer-reviewed journals provide an integrative role, and an absolutely crucial and critical one.
Information literacy is indeed a really important role of librarians, something we incorporate into our training of students about how to do research. We do our part; wish the journalists did their part.

So many things I can’t agree with here — if I haven’t heard of problems, they don’t exist; scholarly standards of quality and rigor are modeled upon public standards of quality and rigor, and don’t represent higher standards or special standards unique to our pursuits; and, disclaimers excuse bad information and negligent distribution.

If you want to read about some of the problems preprints have caused in the press: https://thegeyser.substack.com/p/biorxiv-preprints-in-the-press

If you want to read about some of the issues with bioRxiv in particular: https://onlinelibrary.wiley.com/doi/full/10.1002/leap.1265

The social sciences are also a hotbed of misleading preprints, with the most cited preprint in Wikipedia representing science never peer-reviewed or accepted by a journal editor, and news reports based on preprints occurring regularly.

I hope our standards are higher than that.

My view is that misinterpretations of preprints are primarily the fault of readers and journalists. It’s true that none of us should create proximate occasions for wrongdoing by others, in any sphere of life. To some extent this problem can be obviated with appropriate boilerplate of the kind I mentioned in the com box posting about my brief discussion with the head of the NLM.
Any format is rife with dangers of this kind: books, blogsites, journal articles, emails, poster sessions, and yes preprints.
Instances of the misuse of any format of communication are an argument for better science journalism, not an argument for doing away with the format.

Designing preprint servers so that they actually do what preprints are intended to do — give authors the ability to get pre-publication feedback from trusted peers — would entail a non-public approach, qualification of users, feedback mechanisms, and temporary identifiers with removal after a reasonable period. The fact that they are not designed for purpose and instead presented as open platforms the public can access, where preprints can reside for years (and maybe even in perpetuity), and with no accountability for retraction or removal? I think we can do better, and have different/higher standards about what reaches the public.

I fully agree with Kent’s counter to this comment. Pre-prints serve a very specific purpose, and if one does not understand the true role that pre-prints are supposed to play in the world of scholarly research, they become dangerous. This is true for all research areas, not just biomedicine. In fact, biomedicine is one of the leading subject areas citing predatory journals even in formal publications (per a recent Scholarly Kitchen post), making them incredibly dangerous, especially if we cannot monitor published work through peer review.

The public will continue to be misled by academic research until we can increase the public’s digital literacy, information literacy, and science literacy. This is what media and news outlets have historically preyed upon, and this is how we should fight back to make OA a meaningful movement.

If the funders and other proponents of Open Access publishing are so bent on providing everything for free, why haven’t they founded their own journals and provided the multi-million-dollar endowments necessary to keep them functioning and publishing “for free”? After all, they hold the billion-dollar patents that resulted from the work they funded; work which often depended in large part on work done by others, and reported to their researchers in those dastardly archival journals they keep complaining about.

Actually, many of them have done just this. Although the majority of Gold OA articles are published under an “author-pays” model, the majority of Gold OA journals actually charge no author fees, which means they are underwritten by the institutional publishers of those journals.

Further discussion of the Gold OA publishing landscape can be found here.

This is actually (inter alia) a very good argument against the transformative schemes that are now the rage du jour. Advocates of the latter are counting on funding from two sources. (1) Funding of APCs via government subsidies, via grants (and who knows, maybe down the road they’ll start pushing for more direct government subsidy mechanisms; perhaps this is how at least some of the European schemes work). This is fraught with all kinds of problems having to do with ideological manipulation of knowledge distribution channels. I suppose it will sound histrionic to many readers from a younger generation, but what of the 20th-century instances of this sort of thing? Cf. Lysenkoism, or the “Aryan physics” movement. There is also a failure to address why government subsidy of scholarly publishing is really necessary, given that universities have (in large measure) failed to develop their own journal publishing systems. (2) Funding of APCs via library budgets. I don’t see sufficient concern by libraries about the opportunity costs of diverting significant parts of their budgets to APCs. What other things won’t libraries be able to buy, such as bibliographic databases and monographs?

The cost of focusing on APCs is that of disrupting entire ecosystems of equitable participation in fair competition, open to all potential authors without limits. The opportunity costs for libraries and their institutions include failing to address the need of their faculty and students for more no-fee OA publishing venues to choose from, while promoting and encouraging the APC-based venues they paid for, which may exclude their peers from participation. The illusory competitive advantage of being able to publish exclusively in venues covered by their agreements may later turn into an actual disadvantage: having invested in venues not highly regarded by wider communities.

Interesting discussion as always. Rick always does a good job with any analysis he makes, and his summary here is one of his best. I just want to point out that while preprints are popular in physics, the best papers also end up in the journals published by AIP, and the AIP package of journals is bought by research libraries around the world. Researchers in physics still want to be published in high-impact journals.
My second point: what is the measure of the tipping point? Yes, OA journals have grown, and nearly every publisher has a line of OA journals. However, subscription sales have not diminished. The top 10 STM publishers still make 90% of their revenue from subscription services: journals, databases, and related products. Where is the tipping point?

Thanks for the kind words, Dan.

As for tipping points: my impression is that the term is usually used more aspirationally than analytically. Most often, I think, “We’re at a tipping point!” is a cry to encourage the troops and rally them to greater effort, rather than a data-driven conclusion reached after analysis.

I believe there is another unintended consequence which is only beginning to be seen, but which has a very large impact: the future of libraries. Libraries have traditionally been storehouses of valuable publications. They acquire, curate, classify, store, and then provide access to their collections for their constituents. Digital collections have given users remote access, with no need to go to the library; less space is needed, so libraries are converting to study halls, coffee shops, and exhibit spaces. Administrators are looking at the central and valuable real estate and repurposing library spaces for other needs within the university. The same dynamic caused the closing of corporate libraries when self-service access to databases became widespread and end-user needs were catered to, from the late 1990s through the 2000s. Now university collections face a similar fate. Search is easier, and arguably better, via Google than via the library OPAC, discovery system, or many publisher platforms. Therefore, the only task libraries remain holding is the power of the purse: libraries purchase access to materials for their users. The other tasks (curating, classifying, storing, and providing access) are already being replaced. With Open Access widely and freely available and searchable through Google, there is no need to purchase anything. The library as we know it then becomes unnecessary, except for special collections. In lobbying heavily for open access, librarians imperil their own future.

Please, before the knee jerk reaction begins. I love libraries and have made my career within them. I do not wish for their demise, I am merely pointing out scenarios I see in progress.

I understand that librarians have indeed imperiled their own future, and it is true that the ULs are in a constant battle to preserve their highly prized landscape. However, searching Google has its limitations, and students need help in finding material. While OA has taken over the field, 90% of the information is still behind a paywall and shows little decline. Go into any major research library on a Sunday evening and you will find it filled with students. Law school libraries like NYU’s and others are filled with students even where 80% of the material is available in electronic form.
Students still place a high value on libraries as a place to study. Take away the library and you will have a riot on your hands. After all the rhetoric about OA, there is still an active group of librarians who have been providing service and will continue to do so.

I’m not sure this contradicts Ms Hlava’s point. Libraries are being turned into study spaces for students. That’s why they’re full: the students need a place to work, given their cramped and noisy abodes. Bound books are being moved into storage at many university libraries as they await the full digital turn — for example, humanities monographs available preponderantly or only in digital format. (Students and faculty, meanwhile, are out in front of this turn through the massive, and perhaps some readers don’t realize how massive, use of pirate sites. I know graduate students who *never* set foot in their library, getting everything they need from wherever they are with a click on a pirate site.) Real estate, the digital turn, staffing cuts, piracy, and frankly the decline in serious reading — a recently retired professor in the humanities at a good university remarked to me that in his final working years “getting undergraduates to buy or read a book was swimming against a tide” — are all conspiring to make tomorrow’s university library unrecognizable to those of us over 40 or 50. They’re going to be sheds for students studying, with a server in the basement.

I’m revisiting my comments and hope not to have suggested that critics of preprints are committed to censorship. The question, which was initially rhetorically phrased and probably infelicitously or too strongly (no malice intended; it was written on the fly), still remains. Here’s the question, again: what are the policy recommendations of those who criticize preprints as an appropriate medium of scientific communication? What I’ve heard from the preprint critics is pretty strongly phrased, but it’s not clear what actions they would propose taking to remedy the situation.
So what exactly do the critics have in mind? Lobbying by appropriate groups or societies to shut down certain preprint servers? Professional sanctions on scientists who use them? Which scientists? Just biomed scientists, or does this apply to, say, civil engineering or statistical physics? The Ingelfinger rule widely observed among editors? Or something lighter, such as “read my criticisms and hopefully you won’t publish a preprint”? Just trying to understand. The other piece of the question is: why wouldn’t whatever policy implications the criticisms have also extend to other formats of communication — newspapers, magazines, blogsites, tweets, and whatever else — in which a journalist or writer gets the science wrong or misleads people?
Nor have I heard a good reply to the suggestion that appropriate language accompanying any preprint would be a teaching moment about what peer review is and how a preprint is not peer reviewed (even if a preprint server provides a venue to disagree with other preprints, a sort of peer review even if not the real item.) It would be a disclaimer, easy to understand.
It’s one thing to criticize a format (we should all be doing this), but it’s a distinct question what one actually proposes from a policy standpoint. These are very different things.
Kent Anderson’s assumptions about the reading public are a tad worrisome. Are the type of people who would read a preprint typically folks who would easily be duped or entirely uncritical, particularly if the type of disclaimer mentioned above is in bold letters at the top of every preprint? Ditto people who read an unprofessional journalist’s rendering of what’s in a preprint. Certainly the danger exists of people (sadly, too, benighted journalists) being misled, but well before most people read preprints, they’ll already have done Google searches that bring up tons of misinformation about any topic. Shouldn’t we be at least as worried about the latter? And then where does this concern stop?
I’m interested in hearing the answer, since I take the counterarguments seriously, even if in the final analysis I may disagree. I certainly look forward to reading Mr. Anderson’s recent paper from Learned Publishing when a moment arises.

My perspective on “the reading public” is based on results. Professional journalists, editors, and people in major positions of authority are routinely “duped” by preprints — covering them uncritically and apparently believing they are “papers” in the same way a “paper” in PNAS is a “paper.” So, let’s be clear, my perspective is based on evidence, not assumptions. I think the assumptions lie elsewhere — mainly with the preprint crowd — and those assumptions are being refuted by evidence.

As for equating this with “well, misinformation is everywhere,” that’s a defeatist attitude I can’t abide. Again, our standards should be higher, and we should lead the way, not fall prey to lowest common denominator standards set extrinsically.

Thanks for getting back.
I’m not going to dispute your claims about an evidential base, but there is another evidential base in at least one area: the widespread and very long-term acceptance of preprints in one subject domain, physics.
Also, I haven’t seen a reply to the suggestion that preprints could be marked appropriately, nor my repeated question about what you take to be *policy* implications you propose.
I’m certainly not convinced for the p
This claim about a “preprints crowd” certainly misses a lot of subtleties. Some of us are trying to work through precisely the question that some of the “non-preprints crowd” are trying to work through, namely: are there certain subject domains in which, yes, preprints are not appropriate, particularly medical or biomedical areas? I’m not convinced until I see the counterarguments.

Somehow the comment posted; I hit the wrong key. I’ll add: I’m certainly not convinced that preprints are inappropriate in a wide range of subject areas, and I would ask for a broader attempt to entertain the wide range of counterarguments to your position.
I spoke with the head of the NLM about the points you made after her talk. We discussed a bit the need to create a very clear statement about the meaning of a preprint.
Also, I’m not clear on your comments about how preprints represent some sort of defeatist attitude. Is the hope that we’ll somehow be able to set higher standards that *also* extend to blog posts, comment boxes, op-ed pieces in major newspapers, poster sessions, and popular books about science? That’s one pretty big project.
The project of getting journalists, editors, blog posters, and so on to set high standards is likewise an immense one.

Your notion that because sloppy content distribution occurs elsewhere, we can allow it, seems to me a defeatist attitude. We don’t need to accept what other areas of the information space are playing with, and we have higher and different standards.

I think preprint servers should be designed for purpose. If the purpose is to give a small group of peers early access to non-reviewed drafts so they can help authors improve those prior to submission to peer-reviewed outlets, then there’s no need for preprints to be public.

The framing of Poynder’s work deserves a mention — defeat and victory. Framing something as win/lose makes it inherently political. Is that appropriate when it comes to publishing business models and their effects on editorial quality and market postures? Is it a desirable framework? If so, then we are talking about a political argument, which is about power, not right or wrong. That’s worth contemplating.

Other possible ways he could have framed his report? Strengths/weaknesses. Pros/cons. Benefits/costs. Each of these would frame the discussion with inherent nuance and point to substantive issues from the start.

Casting things in a win/lose framework produces two side-effects we need less of, in my opinion: false equivalency and binary thinking.

If you accept the win/lose frame, you accept that both sides are equally valid on the field of play, and there is only one possible outcome: ascendancy of one, defeat of the other. There is no room for reconciliation, further exploration, compromise, or mutual existence. There is only the battle and the outcome, and both sides are immediately cast as equally viable. The win/lose paradigm has a lot of potent side-effects on thinking.

Just contemplating this out loud in the comments. Maybe we shouldn’t indulge in win/lose thinking when it comes to OA or anything else. Maybe we should use what works best, even if that’s a blend of current ideas or new ideas completely.

Except for the point about the moral basis of OA. If it is the morally right way to publish, as I believe, then a two-sided debate emerges. The questions of APC levels, corporate dominance, Chinese censorship, etc. all stem from that one.
Dan Brockington’s analysis of MDPI actually addresses that second set of questions more than the moral one, but is very useful in this context, particularly given the strong links to China of that company. https://danbrockington.com/2019/12/04/an-open-letter-to-mdpi-publishing/

Is someone going to be summarizing the NISO conference that was held right after Charleston–about preprints?
In my view preprints are symbiotic with peer-reviewed journals, and I am working on making my own preprint about all this more subtle, to distance it a bit from the overlay model while retaining the same basic idea.
Plus I hope in the coming year to work on a publicly accessible narrative about how the model discussed in the current version merely expresses more than three centuries of scientific communication practice, mutatis mutandis, of course. This history is at least as much of an “evidential” base as any study confined to recent data about what works or what doesn’t.
Let there be a full reckoning of recent data, but also of longstanding practice.
(Btw, one of my jobs at Penn was to dismantle the large collection of print preprints in physics areas in the mid-1990s. The print distribution of all this stuff, considering the mailing costs, bespeaks a longstanding and embedded tradition.)

I can’t believe I’m saying this, but I agree with what Kent Anderson says above. Every word. I’m a bit dismayed that this post, which deserves more discussion and debate, has mainly produced a side discussion about green OA, although some “bombs” have been dropped here that are worth discussing further, such as the changing roles of libraries under the aegis of digital everything. I have some thoughts myself but am trying harder than usual to not work today, so I’ll stop here…for now.

Has there been any serious study of what happens to the overall STM publishing labor force should the OA model become the dominant or only model of publishing? There may well be, but I often feel when I read articles on the subject that it is not mentioned, or barely mentioned. Or I should say it is implied, as it is here when you say “real costs of providing publishing services.” Those costs of course mean people, among other things. On a journal-by-journal basis, flipping to complete OA would almost certainly mean an overall loss of revenue for many journals. That would mean reducing editorial support, reducing services, and spreading editorial services across many more journals to share the costs. I know a large, profitable company like Elsevier has been the main target of much of the ire of the OA community, and there is little sympathy for it. However, this industry employs many people, and any fundamental change to the business model has enormous consequences for all of them.

This is a great and important question, especially because full Gold OA (which I’m assuming you’re flagging) would lead to a fundamental shift from a “trusted intermediary” role serving readers to a “production house” mode serving producers/funders.

One thing that has occurred to me is that if revenues become so tightly tied to publication events, get ready to work weekends, especially near year-end.

Right now, with recurring revenues moderating the economics to allow for independent editorial decision-making not tied directly to revenues and paper output, there aren’t pressures to do this, and working weekends is usually based on trying to speed great papers through, not to meet production quotas tied directly to revenues.

In a “full OA” world, outsourcing and the down-sampling of the workforce would also become even more prevalent, I’d think.

Editorial offices, often undervalued, are the hub for editors, authors, reviewers, and readers. I probably interact with at least one of each type every day. Any decent journal builds up loyalty within these audiences. Editorial support is really customer service. This would all change under “full OA” as offices go from 6 to 5 people, 5 to 4, 3 to 2, 2 to 1.5, and so on. And how many journals, unable to meet their quantity goals, will go under as the larger journals take more and more papers to meet theirs? Papers rejected from one journal can feed dozens of other journals, but what happens when rejection rates plummet to meet revenue goals? Those editorial offices go to zero. There are just so many facets to this.
