States that receive more research funding publish more papers, and publishing more papers means publishing more positive (described as “biased”) results. If you follow the argument that productive science is “biased” science, then you’ll appreciate a new article published in PLoS ONE last week.
The paper, “Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data,” by Daniele Fanelli, is a great example of a paper that may be technically sound but fails miserably on theoretical grounds, and its methodology reveals why.
The author defines a scientist’s “research environment” as a geographical state, not an institution (where you’d imagine most of the variation exists), and examines the relationship between the percentage of papers reporting positive results and each state’s productivity and R&D expenditures.
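For readers who want a concrete picture of this kind of state-level analysis, here is a minimal sketch in Python. The state names and numbers are placeholders, not values from the study, and the paper’s actual analysis is more elaborate than a raw correlation; the sketch only shows the general shape of the comparison being made.

```python
# Illustrative sketch: correlate the share of papers reporting positive
# results with a measure of state research intensity (e.g., per-capita
# R&D expenditure). All values below are hypothetical placeholders.

from math import sqrt

# Fraction of sampled papers from each state that report support for the
# tested hypothesis (hypothetical values).
share_positive = {"State A": 0.78, "State B": 0.85, "State C": 0.91, "State D": 0.70}

# R&D expenditure per capita, arbitrary units (hypothetical values).
rnd_per_capita = {"State A": 310.0, "State B": 450.0, "State C": 520.0, "State D": 240.0}

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

states = sorted(share_positive)
r = pearson_r([rnd_per_capita[s] for s in states],
              [share_positive[s] for s in states])
print(f"Correlation between R&D spending and share of positive results: r = {r:.2f}")
```

Note that nothing in a comparison like this distinguishes “bias” from the possibility that better-funded states simply do better science, which is the point of the critique that follows.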
This is like saying that New York State has a research environment, when in fact you’ll find disparate research environments depending on whether you are a scientist at Cornell University or SUNY Cortland, for example. Luckily, the author reveals the weakness of his hypothesis in the paper:
“Although the confounding effect of institutions’ prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists’ productivity but also their bias.”
The author ignores not only differences in research cultures across institutions but also quality differences across journals. Higher-quality journals are much more selective about what they publish, and many will only accept papers that advance their field, which generally means positive results.
In other words, the author ignores the nature of academia: that better scientists go to better institutions with stronger research cultures, receive more federal research funds, do better science, publish papers in better journals, and so on.
This doesn’t sound like “bias” in the negative sense.
Most ground-breaking scientific research in the United States is conducted within a small number of elite institutions of higher education. This is not an indication that the system is biased or flawed or broken — it is an indication that the system is working by concentrating talented individuals with resources in places where they can achieve more than by working alone. This is the social stratification of science at work.
Ultimately, “publication bias” is a paranoid topic. It assumes that there is an intent to distort what is known about a topic, and it can happen at different levels. At the funding level, a pharmaceutical company may condition the release of results on whether the data support the efficacy of a new drug. Researchers themselves may self-censor, holding back negative results (or at least those that do not support a widely held theory) under pressure to publish their work. Some editors may also favor manuscripts with positive results, deeming manuscripts with negative results just plain uninteresting and therefore not worthy of publishing. I’d like to think that these editors are doing what is good for science, not practicing “publication bias.”
The fundamental problem with the PLoS ONE paper is not that it reports a positive association between publication output and positive results. The problem is that it frames its hypothesis backwards: it assumes there should be no geographic variation in results, then treats the variation it finds as evidence of “publication bias” without first ruling out higher-quality research as an explanation.
By twisting an ordinary, uninteresting negative result into a positive one, the paper becomes a good example of the very publication bias the author is attempting to illustrate.