A recent article in PLoS Medicine crossed my threshold. It’s not a very good article, and normally I’d let it slip into oblivion on its own, but its arguments are so labored and tiresome that it got under my skin a little.

The authors’ basic assertion is that an oligopoly of top-tier journals is distorting science by hogging more than its proper share of the “economics” of information.

The article goes off the rails pretty quickly, comparing research reports to economic commodities. Let’s ponder this comparison a moment, just as an appetizer.

A commodity is something that, by definition, can’t be differentiated by quality. So, zinc is zinc, and oxygen is oxygen. They are commodities. Research reports do, however, vary in quality. Therefore, they are not commodities. Bang, off the rails, like I said. They even realize it, but can’t seem to stop writing:

Usually we do not know what information will be most useful (valuable) eventually.

If that’s the case, then it’s not a commodity. If all information were equal, any pound of it would do as well as any other. That’s clearly not the case.

Then the authors go over the bridge and plummet off the trestle, into the river below, in a twisted mass of tortured logic, by asserting that a “winner’s curse” phenomenon exists in this economic system. The winner’s curse is what befalls the winner of a common-value auction: because winning means having held the most optimistic estimate of the item’s worth, the winner tends to discover that the item is worth less than what they paid. Where exactly “the auction” exists in all this is a bit vague. Is it the journals vying for papers? The authors deliberating where to submit? The readers investing mindshare in the results? Ultimately, the authors assert the latter, in long-winded fashion: consumers of scientific information are dazzled by the big journals’ brands, so patients, administrators, and other researchers put too much stock in reports published there, while similar experiments elsewhere are obscured and given less credence. Hence, the winner’s curse. To me, this looks like a smoldering trainwreck at the bottom of Logic Canyon.
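For what it’s worth, the textbook winner’s curse is easy to state precisely, which makes its loose fit here easier to see. In a common-value auction, each bidder forms an unbiased but noisy estimate of the item’s true worth; whoever holds the most optimistic estimate wins, and so systematically overpays. A toy simulation with made-up numbers (true value, bidder count, and noise level are all hypothetical) illustrates the mechanism:

```python
import random
import statistics

random.seed(42)

def winners_curse_trial(true_value=100.0, n_bidders=10, noise=20.0):
    # Each bidder's estimate is unbiased: true value plus zero-mean noise.
    estimates = [random.gauss(true_value, noise) for _ in range(n_bidders)]
    # Naive bidders bid their estimates; the highest estimate wins.
    return max(estimates)

winning_bids = [winners_curse_trial() for _ in range(10_000)]
avg_overpayment = statistics.mean(winning_bids) - 100.0
print(f"Average overpayment by the winner: {avg_overpayment:.1f}")
```

Even though every individual estimate is unbiased, selecting the maximum guarantees the winner overpays on average. The analogy to journal readers is strained precisely because no reader is bidding against anyone or paying a price set by winning.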

Coming from PLoS Medicine, such a messy and simplistic set of assertions surprised me (along with the rambling, barely edited text of the article). Medical research differs from bench research. Often, large patient populations have to be recruited, randomized, monitored, and modeled for years on end. Novel findings are highly prized, and definitive studies that, by their very nature, won’t be replicated for decades serve as landmarks. The reports that emanate from the best experiments often come from top-tier institutions, funded by the largest sources and led by cream-of-the-crop researchers.

This is the framing error the authors make. They see journals as the cause of research emphasis (a more accurate and less loaded term than “distortion”), when journals are the result of research emphasis. Funding agencies, academic programs, and individual researchers largely set the agenda of scientific inquiry. Patients and their advocates have more voice in setting the research agenda than journals ever will. Want proof?

  • Is AIDS research published more often because journals have set a research agenda? Or because the NIH and others have funded it more aggressively than, say, malaria research?
  • Are autism-vaccine articles still being published in journals because researchers haven’t settled the question, or because sensationalism in the public sphere is driving the agenda?
  • Did “alternative medicine” research begin appearing in journals because the editors believed the hypotheses were interesting, or because politicians funded research under pressure from political action committees?

The tiresome part of this is the obsession with journals that continues to emanate from PLoS. Yes, journals can make money because they attract an audience others also want to reach, or an audience willing to pay for the information. That’s economics. But the forces driving what is researched come from elsewhere: funding agencies, scientists, private foundations, public politics, and governments. The research agenda is not driven by the economics of information commodities, as if anything more complex than the binary code of 1-0-0-1 could be an information commodity.

Science is distorted to some extent by the human enterprise, there is no doubt. It is, after all, a human endeavor, and suffers some of the foibles humans bring to anything (as well as the virtues). However, to have an obsession with journals to the exclusion of so many other factors in research — scarce funding, scarce talent, and political realities converging in a world of uncertainty offset by dedication, technique, and creative thinking — strikes me as the real source of distortion here.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


5 Thoughts on "Is Science Being Distorted?"

Analogies can help us understand new discoveries or insights — they can also lead us astray. I’m afraid the Young et al. article in PLoS Medicine does more of the latter.

Economies of stuff are defined by scarcity, and there is no scarcity of information. What is scarce is the attention of the reader.

If an analogy is necessary, it may be more useful to think of a two-sided market (authors on one side, and readers on the other) with asymmetric information; that is, authors know more about the true quality (aka value) of an article than the reader.

Since readers don’t know the true value of an article before reading it, a reader in a market of information overabundance will look for quality signals that indicate what is worth reading. The journal brand is perhaps the strongest such signal, which is why readers attend to it so closely.

This is also why authors are so interested in having their manuscripts published in journals with strong brand/quality signals.
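The value of even a noisy quality signal under scarce attention can be sketched numerically. In this toy model (all quantities hypothetical), articles have an unobserved quality, the signal is quality plus noise, and a reader with limited attention reads only the articles with the strongest signals:

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: each article has an unobserved quality drawn from
# a standard normal; the brand signal is that quality plus noise.
articles = [random.gauss(0, 1) for _ in range(10_000)]
signals = [q + random.gauss(0, 1) for q in articles]

# A reader with limited attention reads only the 100 articles
# carrying the strongest signals.
ranked = sorted(zip(signals, articles), reverse=True)
read = [q for _, q in ranked[:100]]

print(f"Average quality, random pick:  {statistics.mean(articles):.2f}")
print(f"Average quality, signal-based: {statistics.mean(read):.2f}")
```

Filtering on the noisy signal yields a far better average quality than reading at random, which is the economic function the journal brand serves for an attention-constrained reader.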

At a time when authors can simply put copies of their research on their own web site, it helps to understand what authors gain from the transaction with publishers. It is more than most publisher critics will admit.
