Scale remains a defining factor in the current age of scholarly publishing. Economies of scale are driving the consolidation of the industry under a few large players and pushing the small, independent publisher toward extinction. When we think about scale, we tend to think about big, commercial publishers gathering together thousands of journals, but there are other ways to achieve scale in scholarly publishing. Megajournals (and entire publishing houses) are tapping into economies of scale by decentralizing the editorial process. The benefits of this decentralization, however, come with costs, at least in terms of quality control and filtration.
The economic benefits of scale for publishers are obvious: you pay lower prices for services, materials, and personnel when you buy in bulk. Consolidation is the state of the market, and the big publishers keep getting bigger, benefiting more and more from the resulting scale. But scale also tends to exacerbate the complex nature of journal publishing platforms and processes. We saw a good example of this complexity last year when a society-owned journal moved from publishing with Wiley to publishing with Elsevier, and some articles that were meant to be open access were not immediately made so. When even a single journal moves to a new platform, there are countless moving parts to manage. At this level of scale, things fall through the cracks, mistakes get made, and, hopefully, over time, corrected.
There are, however, other approaches to scale beyond just being a really big publisher with lots and lots of titles. Some of the more interesting experiments in the journals market have been geared toward decentralization, that is, looking to benefit from scale by spreading the editorial work of the journal (or journals) broadly. PLOS and Frontiers have both had success with these approaches, but both have also recently had setbacks showing some of the cracks inherent in ceding editorial control to thousands of independent editors.
PLOS ONE is, without a doubt, the greatest success story in journal publishing for the current century. What started as an experiment with a new approach to peer review turned out to fill an unserved market need. It has been successful enough to carry the entire PLOS program into profit, freeing it from relying on charitable donations. But publishing 30,000 articles (and, with rejections, fielding at least another 10,000) presents a major editorial challenge. How do you handle that many articles? PLOS, at least according to its Form 990 declarations, has approached this problem through an enormous amount of outsourcing. Most strikingly, the outsourced functions include hiring third-party managing editors through firms that offer such services. These outsourced managing editors are responsible for making sure submissions are complete and for coordinating peer review.
But that only covers the administrative parts of article handling — what about editorial decision-making? PLOS ONE has some 6,100 editors. Rather than funneling everything through an Editor-in-Chief, the peer review and decision-making process is spread broadly. These editors “oversee the peer review process for the journal, including evaluating submissions, selecting reviewers and assessing their comments, and making editorial decisions.” But even with that many editors on hand, a journal as broad as PLOS ONE still runs into occasional issues with editorial expertise. It is harder and harder for everyone to find good peer reviewers in a timely manner, and the sheer bulk of PLOS ONE sometimes leads to papers being handled by editors without expertise in the research covered.
Without a central point where “the buck stops,” the quality of the review process can be quite variable. Mistakes are made, such as the recently published “Creator” paper with this sentence in the Abstract, “Hand coordination should indicate the mystery of the Creator’s invention.” The authors claimed that this was a mistranslation (they are not native English speakers) and the paper was subsequently retracted.
Let’s be clear — this was not a typical paper, and the vast majority of what PLOS ONE publishes is rigorously reviewed to meet the journal’s standards. But without the Sauronic Eye of an Editor-in-Chief to enforce standards and provide quality control, you’re going to run into papers where someone took a shortcut, didn’t quite do the work, or has an agenda beyond the journal’s stated vision for publication. There is no consistent level of quality control because there are 6,100 different sets of standards being used and no central point where they come together.
Frontiers, the open access publisher, has seen its own share of controversy lately. It was recently declared a “predatory publisher” by Jeffrey Beall, and journalist Leonid Schneider has written extensively about its various issues. Like PLOS ONE, Frontiers has recently had its own nonsense paper published and retracted, and this is part of a larger pattern in which the behavior of the publisher’s 55,000 editors (covering 55 journals) varies enormously in how well it upholds the stated standards for publication.
I don’t think the term “predatory” is accurate for Frontiers, which continues to run some superb journals. The problem is not that Frontiers is making a deliberate attempt to deceive, but rather that its institutional structure makes quality control very difficult. The editorial strategy chosen by Frontiers is oriented toward crowdsourcing and away from careful curation and scrutiny. When you deal with such large quantities, you get into bell curves and averages. Some of the 55,000 editors are very good at their jobs, others not so much. As with PLOS ONE, a broad net is cast for editorial talent, and the resulting performance is wildly inconsistent.
Crowdsourced editorial management is a deliberate strategy — it cuts costs and likely speeds the review process. The gospel of digital disruption has supposedly taught us that the “good enough” product usually wins over the high-quality (but more expensive to produce and higher-priced) product. The question that must be asked, then, is whether these decentralized approaches are “good enough” for the research literature. Is the success-to-failure ratio acceptable? Given the bulk of the journals in question, do we even have an accurate picture of the success-to-failure ratio? The Creator paper sat around for two months before a prominent blogger happened to notice it and fired up the internet’s outrage machine. What other timebombs are lurking in the enormous archives of these publications?
All journals make mistakes and have to issue corrections and retractions, to be sure, but are we willing to accept mistakes that stem from a fundamental lack of oversight, with no one really checking to see that an article was indeed properly reviewed (or reviewed at all)? From a psychological perspective, it sometimes doesn’t even matter if your ratio of quality to mistakes is 5,000 to 1, if that one case is egregious. The PLOS ONE editor who let through a sexist peer review comment suggesting that a paper could have benefited from a male author made front-page headlines and really harmed the journal’s brand. Put another way, it takes years of hard work to build a reputation for quality, but quality is a very fragile attribute and can be destroyed quickly when something like #CreatorGate surfaces. One prominent researcher went so far as to declare the journal “a joke”, wiping out years of reputation building.
…the reputations of journals are used as an indicator of the importance to a field of the work published therein. Some specialties hold dozens of journals — too many for anyone to possibly read. Over time, however, each field develops a hierarchy of titles… This hierarchy allows a researcher to keep track of the journals in her subspecialty, the top few journals in her field, and a very few generalist publications, thereby reasonably keeping up with the research that is relevant to her work.
Ask any researcher in any field and they can tell you the journals which publish the best work that is most relevant to their own research. When faced with an enormous stack of reading, it’s really helpful to be able to prioritize, to know which papers to read first. A good Editor-in-Chief or Editorial Board sets a clear standard for quality and gives a journal its “personality”, which can enable that sort of filtering. When you have 1,000 independent editors each following their own set of rules, the personality of the journal gets diluted, if not lost altogether, and the researcher loses a valuable tool.
Given the number of papers published through such decentralized approaches, there is clear market demand for the services these journals offer. But judging from the furor that arises around blatant editorial errors, such mistakes are unacceptable to the community. Editors have a solemn responsibility to strive for quality in all efforts, and a journal’s reputation is based on someone setting standards and consistently enforcing them. Turn that over to a crowd of editors and the resulting articles are likely going to be all over the place. Does reputation still matter? Is this “good enough” for the scholarly literature?