PLOS ONE had a rough 2013, which brought the simultaneous departures of the CEO and CFO of its parent organization and the first decline in its impact factor, and the first few weeks of the new year have not been kind either. Two papers have attracted the wrong kind of attention, shining a light on what may be more fundamental problems with the mega-journal concept itself.
With an increasingly intense “publish or perish” culture shifting the journals market toward the supply side, mega-journals using an OA model emerged as a natural combination. By far the most successful of them is PLOS ONE, which is to PLOS’s credit: they saw a market need and offered a solution. PLOS ONE has worked beyond anyone’s expectations because that market need is significant; the pressure on researchers to constantly publish is intense and unrelenting.
At some level, the mega-journal is a symptom of a malfunction in the larger academic system. A mega-journal is a pressure valve. And where is the malfunction? As Steve Caplan wrote in a Guardian piece, deeper issues emanate from careers teetering on the brink as funding is diverted, dries up, and is stretched too thin. The pressure to publish is just a symptom of the competition these other problems create. He suggests we refocus on those more central concerns, starting upstream instead of down:
Once we have managed to stem the damage to scientists’ careers, then it will be time to address the very complex issue of how to improve the science publication system.
Authors are responding rationally to what is under their control in the “publish or perish” culture. As long as this is all driven by citations and publication lists, we will continue to see businesses spring up to match the need.
But natural events are not self-justifying, and journals are supposed to refine and define information to some significant extent. Journals are widely believed to be trusted filters. But what if a filter becomes inexplicably porous?
I recently wrote about a PLOS ONE paper on medical journal advertising which clearly lacked expert reviewers. The analysis was poorly designed and poorly executed, and its conclusions were at odds with reality. The editing was sloppy even at the macro level: tables were incomplete, data didn’t jibe with the text, and so forth. As something of an expert in the area, I needed less than 10 seconds to spot the most fundamental flaw in their approach. So how did this paper get through? It wasn’t even “methodologically sound,” the stated basis of PLOS ONE’s editorial approach.
There are no editors in the PLOS ONE database who have “journals” or “advertising” or “business” (except for one with “business process reengineering”) in their lists of expertise.
It’s also unclear that a paper on medical journal advertising fits within even PLOS ONE’s immense scope:
PLOS ONE features reports of original research from all disciplines within science and medicine.
Is “advertising” a discipline within science and medicine? Apparently not: no qualified reviewer was available, which only underscores the point. Yet the paper was put through review nonetheless, and was ultimately accepted and published.
Perhaps the lack of a clearly defined audience is one of the mega-journal model’s fundamental limitations. I believe that if, at the outset, this paper had been aimed at medical or scholarly publishers, as defined by a journal serving that audience (say, Learned Publishing), the reviewers would have been better selected and these errors identified before publication, saving the authors embarrassment and sparing the literature a paper that contributes not signal but noise.
These lists of editorial expertise at PLOS ONE are themselves fascinating to read, as the first one I found shows:
Biology, Case-control studies, Clinical research design, Cohort studies, Computational biology, Diabetes mellitus type 2, Diabetic endocrinology, Disease mapping, Endocrinology, Epidemiology, Gastroenterology and hepatology, Gene function, Gene networks, Genetic epidemiology, Genetic polymorphism, Genetic screens, Genetics, Genetics of disease, Genome analysis tools, Genome expression analysis, Genome sequencing, Genome-wide association studies, Genomics, Heredity, Human genetics, Linkage (genetics), Linkage maps, Liver diseases, Medicine, Meta-analyses, Nutrition, Obesity, Population genetics, Quantitative traits
The contrast between the major areas of expertise (biology, medicine, epidemiology, endocrinology, nutrition) and the highly specific ones is striking. It suggests to me that the matching of papers to editors swings between the very broad and the very narrow, which would lead to consistent mismatches with reviewer expertise. And how can someone be a current, functioning expert in all of these things? I’ve known some truly accomplished people in my time, and I don’t think many would list this many areas in which they’d feel comfortable serving as peer reviewers.
This weakness in matching papers to reviewers is described more starkly in a different post by Caplan, a biomedical researcher who writes incisively about OA and journals:
With my expertise in membrane trafficking and endocytosis – the cell biology of how proteins get from point A to point B within the cell – I was shocked to find a request to review a cardiology paper dealing with electrical pulses in the mouse heart. Just to make it absolutely clear: I don’t and never have done any mouse physiology/cardiology work, and certainly have no expertise (or even much knowledge) in that area. How desperate can a journal be? How bad can the editorial process and peer review be? Pretty bad, I surmise. Unfortunately, this was not the first – or the second – such odd request to review a paper that I’ve received.
While Caplan is not referring directly to PLOS ONE, another academic experienced the flip side of the coin, and resigned as an Associate Editor at PLOS ONE because of it.
This brings us to another PLOS ONE paper, one I’m not qualified to evaluate on its merits, that is currently drawing attention. Why? Mostly because many claim it adds no new information and merely renames “heat maps” as “quilt plots.” In addition, its publication history has some uncomfortable aspects, as a comment left on the paper states:
This short paper was thought to offer utility to resource-poor users, and offered a simple application example. Both the academic editor (myself) and a reviewer raised concerns about the level of originality, and this issue was specifically referred back to the PLoS ONE Editorial office, where the decision to proceed was made.
Another comment, from a PLOS Associate Editor, explains how the decision was made, based on policies at PLOS. Ultimately, the argument seems to turn on a small but critical point (is the paper novel?) that invites other questions about the scope and purpose of PLOS ONE. Some think the paper has marginal novelty in that it offers a simpler way to generate a cruder version of a heat map, while others feel that such an approach is merely derivative. The crucial sentence of PLOS policy may be:
Recapitulation of existing methods, software, or databases is not useful and will not be considered for publication. Combining data and/or functionalities from other sources may be acceptable, but simpler instances (i.e. presenting a subset of an already existing database) may not be considered.
The need to write out such complicated policies is, to me, a recipe for trouble. Most journals publish their scope and boundaries as general principles: a journal for community A, publishing “the best” or “methods” or “reviews.” For instance, the mission of another very large journal, the Journal of Biological Chemistry, is published on its home page. It is clear and brief:
The Journal of Biological Chemistry publishes papers based on original research that are judged to make a novel and important contribution to understanding the molecular and cellular basis of biological processes.
The name identifies the community. Beyond that, there are no loopholes, and the editor-in-chief and other editorial board members are part of the biological chemistry community, so they can readily identify submissions that are out of scope. PLOS ONE uses a federated review model that depends on non-aligned reviewers, with no central editor tied to a specific community, so it is no wonder scope has become an issue. Then there is the difficulty of interpreting, much less remembering, all the policies, especially in the absence of principles as clear and self-reinforcing as a normal journal’s scope and quality standards. It is all reminiscent of the editorial problems we recently encountered with Wikipedia: a loose and unruly bureaucracy of information managers.
With no clear demand for novelty, and only a caution against “recapitulation” (a caution that does not withstand scrutiny, since recapitulating methods and getting different results can be very novel), more questions arise about the power of policies versus principles, and about the trouble of getting the former right while the latter remain unclear.
In another post about this paper, one scientist writes:
There is no consistency in the enforcement of the policy, and a total lack of editorial control. This is not good and I can only see a downward spiral for PLOS ONE unless this changes. At the heart of academic publishing surely sits the basic philosophy that all papers and authors will be treated fairly and equally. I don’t think PLOS ONE are even getting close.
It’s not clear how either of these papers came to be published, given the methodological problems with the first and the apparent violations of PLOS ONE policies with the second. But it’s more intriguing to speculate about why they were published. In an environment where authors and publishers alike have embraced “publish or perish” with increasing abandon, and where the incentives (APCs for publishers, CV lines for authors) are aligned with publication, it is natural to err on the side of publishing.
The sheer volume of PLOS ONE is also suggestive of these same problems. It is hard to know the overall quality of the work in PLOS ONE when papers flow out this fast. With more than 2,000 articles published in the first three weeks of 2014, many more papers may have similar quality or scope problems, but those problems are unlikely to be detected, given the acknowledged difficulty PLOS ONE has reaching readers and the lack of incentive for readers (who want to read less, not more) to take the time to respond to a problematic paper rather than simply ignore it.
One question that emerges is, “What does the brand of PLOS ONE stand for?” As I noted in a controversial 2010 post, its brand proposition may simply be, “the path of least resistance,” a journal of convenience. In a “publish or perish” environment, that convenience has led to a great deal of success. Yet, now the brand is taking on water, judging from the comments of working scientists. What is the path forward? Status quo? Or a serious reconsideration?
As others have pointed out, publishing is, to a large degree, a service industry for academia. But part of the service we can provide is to head off academia’s problems rather than pass them along, whether by passively waving them through or actively encouraging them. As long as the “publish or perish” culture persists to the degree it does, and journals succeed with business models that match that culture stride for stride, we can expect peer review to become ever more accommodating of papers of all types, the definition of “journal” to continue drifting from its traditional meaning, and citable objects to remain worth paying to acquire.