[Image: Portrait of Dmitry Ivanovich Mendeleev, via Wikipedia]

A recent New Yorker article entitled “The Truth Wears Off” has garnered a lot of links and attention over the past few weeks. I read the article when it first came out, and I was underwhelmed. But it kept popping up in the Interwebz, and has been linked to by some sources I usually respect. So I decided to take another look.

Luckily, I’ve also been reading a few history books recently, including my current “top of the stack” book, “The Disappearing Spoon,” a generally entertaining history of chemistry via the periodic table. It provides an instructive counterpoint to the New Yorker’s journalistic attempt to postulate the erosion of truth.

The New Yorker article’s dramatic gestures include trappings like “Is there something wrong with the scientific method?” and “It’s as if our facts were losing their truth.” But hints about what the reporter can’t plumb are numerous — statements like “This suggests that the decline effect is actually a decline of illusion” and “Such anomalies demonstrate the slipperiness of empiricism.”

Jonah Lehrer, the reporter in question, claims truth is seeping away in many fields. It’s especially slippery in medicine, and apparently even more so in psychiatry. These fields are well known to be hindered by some of the most complicated and subjective science around — patients who aren’t compliant, disease states that are half-described or in some cases preliminary, and comorbidities that create an uneven tapestry upon which to write facts.

Lehrer flirts with the truth of his story, but his inability to nail it down points to what he’s missing — namely, the ability to uncover the theory that unites the facts in question.

Theories are the lifeblood of science. Without them, “Truth” with a capital “T” rarely emerges. Even in a history of chemistry — a field that seems much less susceptible to the recursive instabilities of human subjects and psychiatric reflections — there are many stories of how “truth” as once instantiated and propagated fell apart as simple new insights, questions, or arrangements of facts took over. For instance, rotating the periodic table 90 degrees revealed a set of relationships that had not been seen before. Was this an erosion of truth? Or progress?

The well-known story of Mendeleev’s theoretical framework versus Lecoq de Boisbaudran’s experimental empiricism proves instructive. Mendeleev had created a theoretical arrangement of the elements, predicting where gallium must reside. When de Boisbaudran’s experiments clashed with Mendeleev’s predictions, pegging gallium’s density and weight at slightly different values than predicted, Mendeleev told de Boisbaudran to recheck his numbers.

Lecoq de Boisbaudran did recheck his numbers, and ended up retracting his data, soon publishing corrected results that corroborated Mendeleev’s predictions.

Albert Einstein once said, “It is theory that decides what we can observe.”

Hints of not only weak theories but indulgent theories run throughout the New Yorker article, as in this passage relating to a study that scientists thought proved that female barn swallows preferred mating with males who had long, symmetrical feathers (symmetry being gauged by measuring what researchers call “fluctuating asymmetry”):

. . . the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results. Stephen Jay Gould referred to this as the “shoehorning” process. “A lot of scientific measurement is really hard,” Simmons told me. “If you’re talking about fluctuating asymmetry, then it’s a matter of minuscule differences between the right and left sides of an animal. It’s millimetres of a tail feather. And so maybe a researcher knows that he’s measuring a good male”—an animal that has successfully mated—“and he knows that it’s supposed to be symmetrical. Well, that act of measurement is going to be vulnerable to all sorts of perception biases. That’s not a cynical statement. That’s just the way human beings work.”

In this case, the raw observations about feathers are generalized first to one species, then attempts are made to generalize them to other species, and soon the attempts start to fail. This isn’t an erosion of truth. It’s a failure of theorizing. The theory was derived after the fact, and to no useful end other than to publish more papers. It wasn’t a hypothesis that was tested, but a data set shoehorned into a post hoc theory.

The scientific method itself is revealing the limitations of initial findings. It’s working. But we’re so geared to create “headline science,” and so wrapped up in ego and pride, that we’ve forgotten the humility we need to exhibit before the facts. Most important, we may have forgotten that something matters even more than facts — and that is theory.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

7 Thoughts on "The Decline Effect Postulate Fails to Find Its Theory"

Kent,
You got me to re-read the New Yorker article this morning. I don’t support your critique. Lehrer lays out several theories, all of which may work together to explain the “decline effect”:

1. reporting bias
2. publication bias
3. various psychological and social biases that reflect the social nature of scientists and their institutions (a sketch just below shows how the second of these alone can manufacture an apparent decline).
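
To make that concrete, here is a minimal sketch of how the second mechanism, publication bias, can by itself produce a “decline effect.” It is my own toy illustration, not anything from Lehrer’s article; the effect size, sample size, and significance threshold are arbitrary assumptions:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # small but real effect, in standard-deviation units (assumed)
N = 30              # subjects per study arm (assumed)
SIG_T = 2.0         # |t| > ~2 roughly corresponds to p < 0.05 at this sample size

def run_study():
    """Simulate one two-arm study; return (observed effect, t statistic)."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / N + statistics.variance(control) / N) ** 0.5
    return diff, diff / se

# Early literature: only studies clearing the significance bar get published.
early = []
while len(early) < 200:
    effect, t = run_study()
    if t > SIG_T:
        early.append(effect)

# Later replications: results are reported whether or not they are significant.
late = [run_study()[0] for _ in range(200)]

print(f"true effect:             {TRUE_EFFECT:.2f}")
print(f"mean published (early):  {statistics.mean(early):.2f}")  # inflated
print(f"mean replication (late): {statistics.mean(late):.2f}")  # regresses to truth
```

Because only lucky overestimates clear the bar at first, early published effects are inflated; later, unfiltered replications regress toward the true value, with no erosion of truth required.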

He does use words like “truth” and “prove” — words that most empirical scientists are nervous about using, since one is fundamentally incapable of knowing truth or proving anything (outside the realm of mathematics, where “proof” has a narrow meaning).

Substituting “dominant theory/paradigm” for “truth” and “support/demonstrate” for “prove” would essentially preserve the meaning of Lehrer’s argument, while removing the words with which you seem to take issue.

Lehrer brushes past all those things you mention, but could have just arrived at what I think is the really fundamental point — weak theoretical grounds are driving weak tea “truths.” Instead, we’re presented with this sophistry about eroding truth and a failing scientific method. A much better article could have been made of the same points, but with a better theoretical framework — i.e., the pressure to publish, the pressure to produce, and the pressures to maintain funding are driving scientists to dress up facts as theories to such a degree that we’re drowning in weak tea.

If the New Yorker article really talks as though there is some unitary “scientific method,” then it is sadly out of date in terms of where current philosophy of science is. The idea that there is such a thing as THE scientific method was a dogma of scientific positivism, and that has not been the way philosophers have thought about this topic in a very long while. See, e.g., Richard Miller’s “Fact and Method” (Princeton, 1987): http://books.google.com/books?id=UKwN_B4HzisC&printsec=frontcover&dq=Fact+and+Method,+Richard+Miller&source=bl&ots=-qeGyaSpX6&sig=tYqJN1Tv0GvMjz3HSahqaE1qsME&hl=en&ei=btUcTb2WAoH98AbwxuXbDQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBwQ6AEwAA#v=onepage&q&f=false

I liked the piece – it was nicely written. But it’s nothing shocking.

Personally, I thought if you’re going to name-check Kuhn and Popper, then it’d be worth checking with a couple of sociologists and philosophers of science to see what else has been done since 1962. E.g., Changing Order: Replication and Induction in Scientific Practice (Harry M. Collins, 1985).

I don’t think that devalues the piece, though; it caused discussion of the issue in places that would normally ignore it. It’s rare that Collins (1985) does that.

Another rich vein left unexplored is the sloppy use of statistics in many fields, particularly medical research, which vastly over-identifies “significant” results. The NYT recently ran a decent overview of some of these issues, in part with regard to the recent notorious ESP paper. Lehrer touches on this as part of the bias toward positive results, but he doesn’t (if I remember correctly) go into the issue that a lot of medical (and other) research contains inappropriate use or interpretation of statistics.
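
To illustrate the mechanism (my own toy example, not from the NYT piece or Lehrer’s article; the sample size and threshold are arbitrary assumptions): run enough uncorrected significance tests and “significant” results appear even when no effect exists at all.

```python
import random
import statistics

random.seed(42)

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(xs), len(ys)
    se = (statistics.variance(xs) / nx + statistics.variance(ys) / ny) ** 0.5
    return (statistics.mean(xs) - statistics.mean(ys)) / se

N_TESTS, N, SIG_T = 1000, 30, 2.0  # |t| > ~2 approximates p < 0.05 here

false_positives = 0
for _ in range(N_TESTS):
    # Both groups are drawn from the SAME distribution: every null is true.
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    if abs(welch_t(a, b)) > SIG_T:
        false_positives += 1

print(f"{false_positives} of {N_TESTS} true-null comparisons came out 'significant'")
```

Roughly 5% of the comparisons will, so a field running thousands of uncorrected tests is guaranteed a steady stream of spurious “findings” unless it corrects for multiple comparisons.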
