In the spring of 2008, Nicholas Carr published an article in the Atlantic titled “Is Google Making Us Stupid?” It ignited a firestorm of responses at the time, including here at the Scholarly Kitchen. Now Carr is back with an interesting, brief post examining how we define our own role in what we know, and the problems that can arise when we abdicate that role, even to a small extent, by depending on an external source.
Carr’s points are:
- the Internet has a lot of information, but not all information
- truth and knowledge are different from information
- deferring to pre-existing information because it’s available online is essentially becoming computerized
Carr quite rightly objects to the process of dehumanizing knowledge and truth by turning them over to a system of information:
> It’s not what you can find out . . . it’s what you know. Truth is self-created through labor, through the hard, inefficient, unscripted work of the mind, through the indirection of dream and reverie. What matters is what cannot be rendered as code. Google can give you everything but meaning.
Yet, as the age of systems emerges, we might be seduced into thinking that systems of information can become effective substitutes for truth or knowledge — even if these systems are incomplete, non-contextual, non-specific, and situationally insensitive. As medicine has learned over its decades of serious advances, population-based interventions can only go so far — individualized treatments are instrumental to delivering effective care. Each patient holds his or her own truth, so to speak, and the wise physician doesn’t simply follow the guidelines, but uses information as a starting point for judgment.
Interestingly, I also came across a paper I think is related. It’s a recent economic analysis of Comparative Effectiveness Research (CER), a hot topic among policy wonks in healthcare. Why is this paper related? Because it addresses what can happen when homogeneous information is applied to a heterogeneous population, and systematized into an enforcement paradigm.
While CER is trumpeted as a way to eliminate less-effective treatments from medicine, thereby saving money and improving patient care, the economists who researched the topic calculated that the actual effects of homogeneous approaches to a heterogeneous population would lead to a net increase in costs and an overall degradation of patient care.
The authors focused on antipsychotic drugs since they represent one of the largest drug classes and one with heterogeneity in its patient population. The researchers are not alone in their interest here — CER advocates have also argued that antipsychotic utilization is ripe for improvement through CER approaches.
Essentially, the authors find that while CER could save 90% of spending on this drug class, the resulting losses in quality of life would cost the equivalent of 98% of that spending, leaving a net loss of 8% (or about $110 million per year).
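The arithmetic behind those figures is easy to check. A quick sketch, using only the 90%, 98%, and $110 million figures cited above (the implied total annual spending is my own back-of-the-envelope inference, not a number from the paper):

```python
# Figures cited above: CER would save 90% of spending on the drug class,
# but the lost quality of life would cost the equivalent of 98% of it.
savings_share = 0.90
qol_cost_share = 0.98

# Net effect: costs exceed savings by 8 percentage points of spending.
net_loss_share = qol_cost_share - savings_share

# The article pegs that 8% net loss at about $110 million per year,
# which implies total annual spending of roughly $1.4 billion
# (an inference from the cited numbers, not a figure from the paper).
net_loss_dollars = 110e6
implied_total_spending = net_loss_dollars / net_loss_share

print(f"net loss share: {net_loss_share:.0%}")
print(f"implied annual spending: ${implied_total_spending / 1e9:.2f} billion")
```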
Medicine seems to be repeating an age-old duality, just under different rubrics — is it art or science?
It’s a duality physicians play out every day. Why does it resonate with me? What have I seen with my own eyes and my own mental labor? What truth about this do I possess?
I saw this duality play out firsthand a couple of years ago, as two physicians looked at an anomalous result from a routine exam — my own. If the result had been placed in the framework of an external guideline, I would have ended up in surgery with a life-altering outcome. But the resident physician, who was grounded enough to take me, the patient sitting right in front of him, into account (young, healthy, asymptomatic, active), questioned the interpretation embraced by his guideline-loving attending. Their disagreement escalated, and they left the room to resolve the “discussion.” The attending prevailed, and recommended surgery. In addition to doing a ton of research on the topic and forming my own opinion, I asked for a second opinion from a senior clinician. The second-opinion physician caught a major error in the interpretation of the test, and I was off the hook.
Why did this even happen? Because one physician refused to believe what he was seeing with his own eyes. He simply fed data into a system and accepted the path it set for him.
He’d become computerized.
Someone once cautioned me against taking a compelling story too seriously, offering as a rebuke, “An anecdote isn’t data.” True, but perhaps anecdotes matter just as much at the level of knowledge and truth.
Perhaps data are only useful under certain conditions, like those we encounter between truths.
How will we ever know?
Why don’t you check it on Google? Or do you know it to be true?