Among the more thankless tasks in god’s creation is that of the editor. Authors of scholarly materials rarely acknowledge their debt to their editors and may even resent the perfidious scrutiny of their texts. Readers don’t understand the editor’s role — understandably, perhaps, as it is largely invisible to the reader, who imagines himself or herself in direct communion with the living spirit of the author. Our current cultural aversion to anything that smacks of authority or authority structures (this too shall pass — or we will) puts editors into the crosshairs, as they have come to represent the gatekeeper and, hence, the oppressor: It’s as though there were a coherent conspiracy to set self-reinforcing standards for the ruling class. Where once we had Maxwell Perkins, now we have a pigeon-flecked statue of Columbus torn from its pedestal.
The current war against scholarly editors takes many forms, but the most deadly are (a) conflating editorial work with peer review and (b) starving organizations of the money they need to maintain significant editorial operations. I say “maintain” advisedly: there is to my knowledge no effort underway to initiate an editorial operation of the kind we see at, say, The New England Journal of Medicine (NEJM) or Science. Editorial operations of that kind could only have come into being in the past, and those that persevere today owe their existence to their early origins. Indeed, even the work at NEJM and Science, to cite just two of the truly prestigious brands in STM publishing, is regularly derided by opponents of “bench” or “desk” editing as “subjective.” Peer review is sufficient, the argument goes; no need to bring in the gratuitous comments of editors who are not working scientists (even if they were trained as scientists). It would take a dramatic change in the climate for people to understand the word “subjective” not as “not true” or “not based on empirical evidence” but as “point of view.” The subjectivity of an editor is a hypothesis; the experiment is the act of publication; the results are measured in the marketplace. Viewed in this way, Nature and The Lancet have proven themselves to be brilliant hypotheses.
While advocates of traditional publishing often criticize open access (OA) publishing as lacking in editorial standards, this is not necessarily so. Green OA has the same editorial standards as the traditional publications that provide the articles for a Green deposit into a repository. Gold OA is a different matter, however, as the “author-pays” aspect of it limits the payment to what the traffic — meaning the author or his or her benefactor — will bear. Kitchen readers have heard me make the point about the average revenue per article before: If the journals industry has combined revenues of $10 billion, and the number of articles published each year is around 2 million, then the average revenue per article is about $5,000. In an all-Gold world, publishers with revenue greater than $5,000 per article (which includes every one of the most prestigious journals) are highly exposed, especially when some Gold publishers charge as little as $1,500 per article. Thus in a dystopian future where Gold OA dominates, there will be insufficient revenue to cover the high editorial costs of the most distinguished editorial operations. The accelerating decline and fall of the editor can thus be laid at the feet of BioMed Central, which pioneered the Gold model. Of course, not everyone will be unhappy if editors find their next career as a Starbucks barista.
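The back-of-the-envelope arithmetic above can be sketched in a few lines. The figures are the post’s own round estimates ($10 billion in combined revenues, roughly 2 million articles per year, a $1,500 low-end APC), not audited industry data:

```python
# Back-of-the-envelope: average revenue per article in an all-Gold world.
# All figures are the round estimates used in the text, not audited data.
industry_revenue = 10_000_000_000   # ~$10 billion in combined journal revenues
articles_per_year = 2_000_000       # ~2 million articles published annually

avg_revenue_per_article = industry_revenue / articles_per_year
print(f"Average revenue per article: ${avg_revenue_per_article:,.0f}")

# A journal whose per-article revenue exceeds this average is exposed if
# APCs set the price; some Gold publishers charge as little as $1,500.
low_end_apc = 1_500
gap = avg_revenue_per_article - low_end_apc
print(f"Gap between the average and a $1,500 APC: ${gap:,.0f}")
```

The point of the exercise: any publisher whose editorial cost structure depends on more than the industry-average $5,000 per article has no obvious way to cover it once the low-end APC becomes the market price.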
Which brings us to the agencies that support Gold OA by tying OA to research grants. Why would such organizations take steps that would lead to the undermining of outstanding editorial programs? There are three possibilities, the first of which is Hanlon’s Razor. Hanlon’s Razor states that one should never attribute to malice that which is adequately explained by stupidity. Then there is the cynical view: how galling it must be to back a researcher with a large sum of money only to see the high-impact journals reject the papers that grow out of that research. Distinguished publishers, in other words, hold grants officers accountable — and who wants to be held accountable? The cynics among us suspect that funding agencies are leading the war on editors, with the aim of reducing scientific publishing to content marketing: articles become content that promotes the brands of their tax-advantaged funders. Finally, we have the Law of Unintended Consequences, whose realm is boundless. In this view (which overlaps with Hanlon’s Razor) the funding agencies are attempting to do a good thing, but don’t appreciate that their actions may serve to weaken the strongest and most distinguished editorial franchises.
I find it hard to take a generous view of the funding agencies because of the way they report their finances. A clear illustration of this was published a while back on the eLife blog. That explanation of the cost of publishing research is financially illiterate, accounting only for marginal costs and leaving out fixed costs and overhead. But, hey! Who’s counting? Not including fixed costs and overhead is akin to saying that the cost of delivering higher education consists of the sum of the hours an instructor actually spends in front of a classroom. I was amused to see that eLife’s analysis now has company in the Trump administration, which is proposing cuts-that-are-not-cuts to the NIH research budget. How can this be done? Why, by not reimbursing for overhead. If this action were implemented, it would reduce NIH funding by about one-third, shutting down many research projects and putting many a postdoc on the street. But again, who’s counting?
Putting Gold OA into the hands of funding bodies has the practical effect, whatever the intentions of the agencies, of making more robust editorial operations seem terribly overpriced. This is why there are no new plans to create such editorial shops and why we may someday live in a world without them.
Theoretically, there is a way out of this. Post-publication peer review, by whatever name, could take the place of the editorial work that in the traditional model occurs prior to publication. And it makes a certain sense: let’s have the community at large evaluate publications. The problem is that there is to date no strong economic model for this, though organizations such as the Faculty of 1000 and Publons are trying to change that. Unless and until a strong, widespread service with solid economics is developed, post-publication peer review (I would prefer to call it “editorial appraisal”) will not supplant the work that we now associate with our finest publications.