Ibn al-Haytham, a polymath of the 10th and 11th centuries and an early voice defining what would become the scientific method, is quoted as saying:
Truth is sought for its own sake. And those who are engaged upon the quest for anything for its own sake are not interested in other things.
Yet science today seems increasingly interested in “other things,” from academic advancement to financial rewards. And the scientific publishing process seems more and more geared to abetting these pursuits, as the number and capacity of outlets have exploded over the past decade.
Meanwhile, we continue to encounter scandals and shortcomings. A recent case involving a Dutch researcher named Diederik Stapel will likely lead to dozens of retractions. The first domino fell recently in Science, and Stapel has relinquished his PhD in light of his misbehavior. In the risk-reward calculation likely at work, Stapel evidently felt the rewards of publication and citation outweighed the risk of being caught.
A recent commentary in the Chronicle of Higher Education, entitled “Despite Occasional Scandals, Science Can Police Itself,” seeks to put a happy spin on the situation. I’m not sure I accept the author’s argument, which boils down to this: despite dozens of fraudulent papers sitting in the literature for years, the perpetrator was finally caught; therefore, science can police itself.
There are a few problems with this cheerful line of thought. First is the obvious “we don’t know what we don’t know.” How many papers currently in the literature are packed with explosives, waiting to detonate under our noses? It’s hard to tell. And aside from the scandalous, there are the individually less harmful but collectively more pernicious papers, the uninteresting, unread, and uncited, turning the stepping stones of science into something more akin to an intellectual swampland. It’s hard for the police to succeed when they’re up to their waists in sludge.
It seems malfeasance is typically uncovered by a tip from someone close to the perpetrator. It’s not as if science polices itself; rather, someone fed up with a cheater’s charade, and the success of it, finally blows the whistle, a major journal investigates, and months later there is a retraction. Humans police humans, the same as if a drug kingpin had been narced out by a crony. There’s usually nothing noble about how “science” polices itself. Baser human motivations feed the fraud, and baser human motivations ultimately tip off the authorities.
Exaggeration is another form of careerism: hardening hints and shadows into declarative certainties for the sake of a higher-impact publication. This plays on the fairly prevalent desire to believe that scientific publishing yields “truth.” At the recent STM Innovations meeting in London, a semantic specialist decried the tendency of semantic vendors and text miners to say that their algorithms can reveal factual statements. Her far more modest, and more correct, interpretation is that these techniques can identify claims. And a claim or assertion is far from a fact.
How far claims can be from the truth was the subject of a recent article in the Wall Street Journal [subscription required]. Entitled “Scientists’ Elusive Goal: Reproducing Study Results,” the article details how scientists at various companies are finding it difficult to reproduce results published in the literature, wasting time and money. Apparently, this is a dirty little secret coming to light:
“I was disappointed but not surprised,” says Glenn Begley, vice president of research at Amgen of Thousand Oaks, Calif. “More often than not, we are unable to reproduce findings” published by researchers in journals.
The article notes that the preference for positive findings, the pressure to generate grant funding, and the “publish or perish” mentality lead some researchers to cherry-pick results. The sheer number of available outlets also makes publication a high-likelihood event.
Amgen isn’t the only company dealing with a soft scientific literature:
In September, Bayer published a study describing how it had halted nearly two-thirds of its early drug target projects because in-house experiments failed to match claims made in the literature. The German pharmaceutical company says that none of the claims it attempted to validate were in papers that had been retracted or were suspected of being flawed. Yet, even the data in the most prestigious journals couldn’t be confirmed.
The lesson once again is that science is done by humans and is prone to the failings of its practitioners and their institutionalized practices. A comment by a Wall Street Journal subscriber, David Eyke, is quite illuminating:
Modern management science tells us that if you want more of something, all you have to do is measure it. . . . The measurement alone – even if you do nothing else such as attach consequences to the values produced by the measurement – helps to massively improve the value of the output. . . . What we aren’t measuring is the REPLICATION rate of scientific work by scientists. We aren’t measuring it, nor are we publishing it widely. In other words, we tell the scientific community that we ignore their poor efforts and wasted research dollars. . . . This is because we are not measuring it. And management science tells us that if you don’t measure, you are going to waste a whole lot of money.
Perhaps we’re measuring the wrong things (number of publications, number of citations, impact factors of publication outlets) as proxies for a scientist’s productivity, which we then reward with money, either directly or indirectly. Perhaps we should instead measure how many results have been replicated. Without that, we are pursuing a cacophony of claims, not cultivating a world of harmonious truths.
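The arithmetic of such a metric is trivial; the hard part is collecting the data. As a purely illustrative sketch in Python, assuming a hypothetical record set of replication attempts (the journals, fields, and outcomes below are invented for illustration, not drawn from any real dataset):

```python
# Illustrative sketch only: computes a simple replication rate per journal.
# All records below are hypothetical.

from collections import defaultdict

# Each record: (journal, replication_attempted, replication_succeeded)
attempts = [
    ("Journal A", True, True),
    ("Journal A", True, False),
    ("Journal B", True, False),
    ("Journal B", False, False),  # never attempted, so excluded from the rate
]

tallies = defaultdict(lambda: {"attempted": 0, "replicated": 0})
for journal, attempted, replicated in attempts:
    if attempted:
        tallies[journal]["attempted"] += 1
        tallies[journal]["replicated"] += int(replicated)

for journal, t in sorted(tallies.items()):
    rate = t["replicated"] / t["attempted"]
    print(f"{journal}: {t['replicated']}/{t['attempted']} replicated ({rate:.0%})")
```

A number that simple, published alongside citation counts, would at least make the gap between claims and confirmed results visible.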