After a rough 2013, in which PLOS ONE saw the simultaneous departures of the CEO and CFO of its parent organization and the first decline in its impact factor, the first few weeks of the new year have not been kind to the journal, either. Two papers have attracted the wrong kind of attention, shining a light on what may be more fundamental problems with the mega-journal concept itself.
With an increasingly intense “publish or perish” culture shifting the journals market toward the supply side, mega-journals using an OA model emerged as a natural combination. The most successful of them by far is PLOS ONE, which is to PLOS’s credit; they saw a market need and offered a solution. PLOS ONE has worked beyond anyone’s expectations because that need is significant: the pressure on researchers to constantly publish is intense and unrelenting.
At some level, the mega-journal is a symptom of a malfunction in the larger academic system. A mega-journal is a pressure valve. And where is the malfunction? As Steve Caplan wrote in a Guardian piece, deeper issues emanate from careers teetering on the brink as funding is diverted, dries up, and is stretched too thin. The pressure to publish is just a symptom of the competition these other problems create. He suggests we refocus on those more central concerns, starting upstream instead of downstream:
Once we have managed to stem the damage to scientists’ careers, then it will be time to address the very complex issue of how to improve the science publication system.
Authors are responding rationally to what is under their control in the “publish or perish” culture. As long as this is all driven by citations and publication lists, we will continue to see businesses spring up to meet the need.
But natural responses are not self-justifying, and journals are supposed to refine and define information to a significant extent. Journals are widely trusted to act as filters. But what if a filter becomes inexplicably porous?
I recently wrote about a PLOS ONE paper on medical journal advertising which clearly lacked expert reviewers. The analysis was poorly designed and poorly executed, and its conclusions were at odds with reality. The editing was sloppy at the macro level: tables were incomplete, data did not jibe with the text, and so forth. As something of an expert in the area, I needed less than 10 seconds to spot the most fundamental flaw in the approach. So, how did this paper get through? It wasn’t even “methodologically sound,” the basis of PLOS ONE’s editorial approach.
There are no editors in the PLOS ONE database who have “journals” or “advertising” or “business” (except for one with “business process reengineering”) in their lists of expertise.
It’s also unclear that a paper on medical journal advertising fits within even PLOS ONE’s immense scope:
PLOS ONE features reports of original research from all disciplines within science and medicine.
Is “advertising” a discipline within science and medicine? The lack of a qualified reviewer suggests it is not, which only underscores the obvious. Yet the paper was put through review nonetheless, and was ultimately accepted and published.
Perhaps the lack of a clearly defined audience is one of the limitations of the mega-journal model. I believe that if, at the outset, the audience for this paper had been medical or scholarly publishers, as defined by a journal aimed at that audience (say, Learned Publishing), the reviewers would have been better selected and these errors identified before publication, saving the authors embarrassment and sparing the literature a paper that ultimately contributes no signal, only noise.
These lists of editorial expertise at PLOS ONE are themselves fascinating to read, as the first one I found shows:
Biology, Case-control studies, Clinical research design, Cohort studies, Computational biology, Diabetes mellitus type 2, Diabetic endocrinology, Disease mapping, Endocrinology, Epidemiology, Gastroenterology and hepatology, Gene function, Gene networks, Genetic epidemiology, Genetic polymorphism, Genetic screens, Genetics, Genetics of disease, Genome analysis tools, Genome expression analysis, Genome sequencing, Genome-wide association studies, Genomics, Heredity, Human genetics, Linkage (genetics), Linkage maps, Liver diseases, Medicine, Meta-analyses, Nutrition, Obesity, Population genetics, Quantitative traits
The contrast between major areas of expertise (biology, medicine, epidemiology, endocrinology, nutrition) and much more specific areas is striking. It suggests to me that the matching approach mixes very broad and very narrow categories, which would lead to consistent mismatches between papers and reviewer expertise. Also, how can someone be a current and functional expert in all these things? I’ve known some truly accomplished people in my time, and I don’t think many would list this many areas in which they’d feel comfortable serving as peer reviewers.
This weakness in matching papers to reviewers is described more starkly in a different post by Caplan, a biomedical researcher who writes incisively about OA and journals:
With my expertise in membrane trafficking and endocytosis – the cell biology of how proteins get from point A to point B within the cell – I was shocked to find a request to review a cardiology paper dealing with electrical pulses in the mouse heart. Just to make it absolutely clear: I don’t and never have done any mouse physiology/cardiology work, and certainly have no expertise (or even much knowledge) in that area. How desperate can a journal be? How bad can the editorial process and peer review be? Pretty bad, I surmise. Unfortunately, this was not the first – or the second – such odd request to review a paper that I’ve received.
While Caplan is not referring directly to PLOS ONE, another academic experienced the flip side of the coin, and resigned as an Associate Editor at PLOS ONE because of it.
Which brings us to another PLOS ONE paper, one I’m not qualified to evaluate, that is currently drawing attention. Why? Mostly because many claim it doesn’t add new information, and merely renames “heat maps” as “quilt plots.” In addition, its publication history has some uncomfortable aspects, as a comment left on the paper states:
This short paper was thought to offer utility to resource-poor users, and offered a simple application example. Both the academic editor (myself) and a reviewer raised concerns about the level of originality, and this issue was specifically referred back to the PLoS ONE Editorial office, where the decision to proceed was made.
Another comment from a PLOS Associate Editor explains how the decision was made, based on policies at PLOS. Ultimately, the argument seems to turn on a small but critical point (is the paper novel?) that invites other questions about the scope and purpose of PLOS ONE. Some think the paper has marginal novelty in that it offers a simpler way to generate a cruder tool akin to a heat map, while others feel that such an approach is not novel at all, merely derivative. The crucial sentence of PLOS policy may be:
Recapitulation of existing methods, software, or databases is not useful and will not be considered for publication. Combining data and/or functionalities from other sources may be acceptable, but simpler instances (i.e. presenting a subset of an already existing database) may not be considered.
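For readers who haven’t seen the plots in question, the kind of heat map the critics have in mind is already a few lines of standard code in common analysis environments, which is why the novelty question looms so large. Below is a minimal sketch in Python with matplotlib, using made-up data; it is only an illustration of how routine this plot type is, not the method or code from the paper under discussion.

```python
# Minimal heat-map sketch with hypothetical data (5 groups x 8 variables).
# Illustrative only; not the "quilt plot" implementation from the paper.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
values = rng.random((5, 8))             # made-up matrix of values in [0, 1)

fig, ax = plt.subplots()
im = ax.imshow(values, cmap="viridis")  # colour each cell by its value
ax.set_xticks(range(8))
ax.set_xticklabels([f"var {i + 1}" for i in range(8)])
ax.set_yticks(range(5))
ax.set_yticklabels([f"group {i + 1}" for i in range(5)])
fig.colorbar(im, ax=ax, label="value")
plt.show()
```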
The need to write out such complicated policies is, to me, a recipe for trouble. Most journals publish their scope and boundaries as general principles: a journal for community A, publishing “the best” or “methods” or “reviews.” For instance, the mission of another very large journal, the Journal of Biological Chemistry, is published on its home page. It is clear and brief:
The Journal of Biological Chemistry publishes papers based on original research that are judged to make a novel and important contribution to understanding the molecular and cellular basis of biological processes.
The name identifies the community. Beyond that, there are no loopholes, and the editor-in-chief and other editorial board members are part of the biological chemistry community, so they can readily identify submissions that are out of scope. PLOS ONE uses a federated review model that depends on non-aligned reviewers, with no central editor aligned with a specific group, so it’s no wonder scope has become an issue. Then there is the difficulty of interpreting, much less remembering, all the policies, especially in the absence of principles as clear and self-reinforcing as a normal journal’s scope and quality principles. It is all reminiscent of the editorial problems we recently encountered with Wikipedia, a loose and unruly bureaucracy of information managers.
With no clear demand for novelty, and only a caution against “recapitulation” (one that doesn’t withstand scrutiny, since recapitulating methods and getting different results can be very novel), more questions arise about the power of policies versus principles, and about the difficulty of getting the former right while the latter remain unclear.
In another post about this paper, one scientist writes:
There is no consistency in the enforcement of the policy, and a total lack of editorial control. This is not good and I can only see a downward spiral for PLOS ONE unless this changes. At the heart of academic publishing surely sits the basic philosophy that all papers and authors will be treated fairly and equally. I don’t think PLOS ONE are even getting close.
It’s not clear how either of these papers came to be published given the methodological problems with the first and the apparent violations of PLOS ONE policies with the second. But it’s more intriguing to speculate why they were published. In an environment where both authors and publishers have had to embrace “publish or perish” simultaneously and with increasing abandon — and where incentives (APCs and CVs) are aligned with publication — it’s natural to err on the side of publication.
The sheer volume of PLOS ONE is also suggestive of these same problems. It’s hard to know the overall quality of the work in PLOS ONE when papers are flowing out this fast. With more than 2,000 articles published in the first three weeks of 2014, many papers might have similar quality or scope problems, but those problems are unlikely to be detected, given the acknowledged difficulty PLOS ONE has reaching readers and the lack of incentives for readers, who want to read less anyhow, to take the time to respond to a problematic paper rather than simply ignore it.
One question that emerges is, “What does the brand of PLOS ONE stand for?” As I noted in a controversial 2010 post, its brand proposition may simply be, “the path of least resistance,” a journal of convenience. In a “publish or perish” environment, that convenience has led to a great deal of success. Yet, now the brand is taking on water, judging from the comments of working scientists. What is the path forward? Status quo? Or a serious reconsideration?
As others have pointed out, publishing is, to a large degree, a service industry for academia. However, part of the service we can provide is to head off the problems of academia rather than pass them along, whether by passively allowing them through or actively encouraging them. As long as the “publish or perish” culture persists to the degree that it does, and journals succeed with business models that match this culture stride for stride, we can expect peer review to become more accommodating of papers of all types, the definition of “journal” to continue to drift from its traditional meaning, and citable objects to continue to be worth paying to acquire.
Discussion
18 Thoughts on "Can Mega-journals Maintain Boundaries When They and Their Customers Align on "Publish or Perish"?"
Another disturbing thing about that medical advertising paper is that one of the authors has publicly declared that the study, “…is clearly an estimate…Was meant to start a debate + push for real figures.” (https://twitter.com/BradenONeill/status/425620435057643520). That would also seem to be a violation of PLOS ONE’s publication criteria, as it does not accept “Hypothesis or proposal papers”, “Letters, commentaries or essays”, “Opinion pieces” or “Policy papers” (http://www.plosone.org/static/publication#reporting), any of which this paper could easily be considered to be if we take the author at his word that it wasn’t meant to be an accurate research study but instead a piece of advocacy meant to drive an agenda.
The heat map paper is troubling in a different manner. If an editorial policy to not consider a work’s significance or value in the review process goes so far as to ignore whether it is an original work or a restatement of previous work or common knowledge, then it seems ripe for abuse. What’s to prevent an author from padding his CV by republishing the same paper several times? It would seem that as long as an author avoids using the same words and figures to get around plagiarism issues, they could just keep publishing the same conclusion over and over again (assuming they have $1350 to burn each time).
“Another disturbing thing about that medical advertising paper is that one of the authors has publicly declared that the study, ‘…is clearly an estimate…Was meant to start a debate + push for real figures.’”
It looks like that tweet was made well after the article was published and that was not stated or anyway indicated in the manuscript at least that I can see. I don’t think it is fair to blame PLoS, the editors or the reviewers for that issue.
As for your last paragraph, the PLOS ONE review criteria are pretty clear.
“Results reported have not been published elsewhere.”
If followed, it would seem to address your concern.
PLOS ONE dropped the ball on the medical advertising paper, no question about it. Subscription journals make some pretty dumb mistakes as well. I hate to bring up a well worn example but…
http://www.sciencemag.org/content/332/6034/1163
Does the APC business model encourage this? Yes, if a publisher takes a very short view. PLOS ONE, like any other journal, lives on its reputation. OA publishers who want to stay in business and flourish have as much incentive as subscription journals to be rigorous in their reviews.
It looks like that tweet was made well after the article was published and that was not stated or anyway indicated in the manuscript at least that I can see. I don’t think it is fair to blame PLoS, the editors or the reviewers for that issue.
I don’t know, reading the article, it seemed pretty obvious to me that there was an agenda in play here. I suspect that because the agenda was likely in line with the beliefs of those reviewing the paper, it may have been given an extra amount of slack.
As for your last paragraph, the PLOS ONE review criteria are pretty clear.
“Results reported have not been published elsewhere.”
If followed, it would seem to address your concern.
It was suggested to me that this criterion specifically refers to plagiarism, not to the originality of the results themselves, that there’s no requirement for the information to be new, just rigorously derived: https://twitter.com/AdamGDunn/status/425753312101539840
If true, then all you need to do is write new text and create some different figures. If you’re really scrupulous about it, publish the research, then run another set of trials and publish it again.
PLOS ONE dropped the ball on the medical advertising paper, no question about it. Subscription journals make some pretty dumb mistakes as well. I hate to bring up a well worn example but…
So shall we add PLOS ONE to the list of “glamour” journals to boycott?
Here is PLOS ONE’s full published criterion for “Results reported have not been published elsewhere”:
——————————————————————————
Results reported have not been published elsewhere.
Previously Published Studies
PLOS ONE does not accept for publication studies that have already been published, in whole or in part, elsewhere in the peer-reviewed literature. In addition, we will not consider submissions that are currently under consideration for publication elsewhere.
All figures included in manuscripts should be original, and should not have been published in any previous publications.
We will consider manuscripts that have been deposited in preprint servers such as arXiv or published in a thesis. We will also consider work that has been presented at conferences.
Replication Studies
If a submitted study replicates or is very similar to previous work, authors must provide a sound scientific rationale for the submitted work and clearly reference and discuss the existing literature. Submissions that replicate or are derivative of existing work will likely be rejected if authors do not provide adequate justification.
——————————————————————————
It seems pretty clear what they mean.
Yeah, if only PLOS ONE could be more like subscription journals, which never ever publish questionable or low-quality pap…OHWAIT
Not sure whether to characterize this as a “red herring” argument or a “strawman”. No one is claiming that all journals other than PLOS ONE are perfect. The point being made here is that different business models respond to different pressures. Responding that it’s okay for PLOS ONE to have problematic and unenforced standards because other journals are bad too doesn’t do much to help solve the issue.
It seems to me that PLOS learned the hard lesson that publishing is expensive. There are costs one never considers when just looking at a journal’s cover and its TOC. It seems cheap!
One faces two problems:
1. The initial attraction of the new idea to reputable authors is great. As time goes on, those with reputations to maintain do not want to be associated with those who don’t have them, and the result is a loss of authorship (the impact factor going down) and the publication of papers like the one that brought this discussion about.
2. PLOS has learned that it is not cheap to be a publisher. In fact, one needs a great deal of money to publish, even using the PLOS model. This makes PLOS a victim of the monster it created: once you create it, you have to feed it, because it demands to be fed. The bigger it gets, the more food it requires, and to meet that demand the quality of the food purchased may be lower than what was offered initially.
Kent, your critique of PLOS ONE would be more compelling if you had a track record of expressing similar concerns in relation to research published in more traditional academic journals (à la John Ioannidis and Richard Smith), or of supporting calls for improvements in the standards of reporting of research (I’m thinking of the CONSORT standards and the recent articles in the Lancet). Please correct me if I’m wrong (with some pointers to stuff you’ve published), but without a sense that you are engaging in the wider debate about the quality of published research, your criticism of PLOS ONE looks narrow and partisan.
https://en.wikipedia.org/wiki/Ad_hominem
An ad hominem (Latin for “to the man” or “to the person”), short for argumentum ad hominem, is a general category of fallacies in which a claim or argument is rejected on the basis of some irrelevant fact about the author of or the person presenting the claim or argument. Fallacious ad hominem reasoning is normally categorized as an informal fallacy, more precisely as a genetic fallacy, a subcategory of fallacies of irrelevance.
This blog is concerned with publishing, so when an analysis of medical journal advertising was published, someone alerted me to it, I read it, immediately saw a problem, and wrote about it. Why? Because when it comes to publishing, especially medical journal publishing, I have some expertise, and the issue fits within the bounds of this blog.
Subsequent to that, word emerged that this was not the only problematic paper published in PLOS ONE in January 2014, and PLOS ONE is a venture that gets a lot of attention in scholarly publishing. It gets a lot of coverage here, both encouraging and critical. (See http://scholarlykitchen.sspnet.org//?s=plos+one for a good list.) But this, again, is about publishing, particularly the peer review process and the definitions of audience, purpose, scope, and criteria. Insofar as journals, editors, and publishers have an effect on what is published, I’ve been working my entire career on making things better, and continue to do so (see yesterday’s announcement, here: http://www.eurekalert.org/pub_releases/2014-01/joba-sie012814.php).
If you want to criticize particular points I got wrong, I’m happy to respond. But this is ad hominem otherwise.
This topic and the other belong on this blog. In fact, this blog is, to some increasing extent, a major part of “engaging in the wider debate about the quality of published research,” but it’s focused on the domain we cover.
We know our scope and audience. That’s why these posts are here, and why we don’t attempt regular reviews of scientific papers that don’t touch on publishing. So, thanks for helping me underscore a major point of the post. We have a purpose, an audience, and a defined scope (scholarly publishing). And given that clear scope, we’re pretty effective at providing interesting and helpful perspectives and insights, from what I can tell.
Might it not be useful to look at the other components of this issue and consider it all as one system? For example, it may be that Promotion and Tenure committees contribute to the problem by weighting external evaluations such as the publication record too heavily. Then what of post-publication review? Might some mechanism for such review serve as a check or balance? Zooming out to a systemic point of view could stimulate novel solutions. Of course, it might also be boggling.
There’s much to be said for “The buck stops here!” If you like what you see in my journal, thank you, I’ll pass it along to the team. If you don’t, well, you’ve come to the right place.
Easy for a small independent journal, hard for a mega-journal. Perhaps we will see PLOS ONE subdivide into manageable specialty areas? If they do, PLOS ONE will look a lot more like a traditional publisher.
Re manageability, PLOS CEO Elizabeth Marincola was quoted last week at the ALPSP Masters in Publishing webinar (#MIPMarincola on Twitter) as saying that the distinction between PLOS journals will disappear over time, but I guess that wouldn’t rule out future subdivision. She was also quoted as saying, “The disappearance of journals as we know them today? Yes, that’s entirely possible.”
Her statement is both troubling and encouraging to me. Troubling, because it looks like PLOS papers will be treated as commodities, with reviews churned out by a vast, mostly impartial system.
Encouraging, because commoditization would leave open the field for smaller, “boutique” journals like mine that would offer authors a higher level of thoughtful review and service. These journals might have a significant advantage in attracting the best authors.
This just out today, perhaps another consequence of limited central authority:
Concern over tacit conflicts of interest in PLoS ONE peer reviews
http://www.timeshighereducation.co.uk/news/concern-over-tacit-conflicts-of-interest-in-plos-one-peer-reviews/2010908.article