Is Google Making Us Stupid, Part II: Perhaps, If We Don’t Seek Truth Over Information

In the spring of 2008, Nicholas Carr wrote an article in The Atlantic entitled “Is Google Making Us Stupid?” It ignited a firestorm of responses at the time, including here at the Scholarly Kitchen. Now, Carr is back with an interesting, brief post examining how we define our own role in what we know, and the problems that can arise when we abdicate that role, even to a small extent, through dependence on an external source.

Carr’s points are:

  1. the Internet has a lot of information, but not all information
  2. truth and knowledge are different from information
  3. deferring to pre-existing information because it’s available online is essentially becoming computerized

Carr quite rightly objects to the process of dehumanizing knowledge and truth by turning them over to a system of information:

It’s not what you can find out . . . it’s what you know. Truth is self-created through labor, through the hard, inefficient, unscripted work of the mind, through the indirection of dream and reverie. What matters is what cannot be rendered as code. Google can give you everything but meaning.

Yet, as the age of systems emerges, we might be seduced into thinking that systems of information can become effective substitutes for truth or knowledge — even if these systems are incomplete, non-contextual, non-specific, and situationally insensitive. As medicine has learned over its decades of serious advances, population-based interventions can only go so far — individualized treatments are instrumental in delivering effective care. Each patient holds his or her own truth, so to speak, and the wise physician doesn’t simply follow the guidelines, but uses information as a starting point for judgment.

Interestingly, I also came across a paper I think is related. It’s a recent economic analysis of Comparative Effectiveness Research (CER), a hot topic among policy wonks in healthcare. Why is this paper related? Because it addresses what can happen when homogeneous information is applied to a heterogeneous population, and systematized into an enforcement paradigm.

While CER is trumpeted as a way to eliminate less-effective treatments from medicine, thereby saving money and improving patient care, the economists who studied the topic calculated that applying homogeneous approaches to a heterogeneous population would actually lead to a net increase in costs and an overall degradation of patient care.

The authors focused on antipsychotic drugs since they represent one of the largest drug classes and one with heterogeneity in its patient population. The researchers are not alone in their interest here — CER advocates have also argued that antipsychotic utilization is ripe for improvement through CER approaches.

Essentially, the authors find that while CER could eliminate 90% of spending on this drug class, the losses in quality of life and their downstream effects would cost the equivalent of 98% of that spending, for a net loss of 8% (or about $110 million per year).
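For readers who want to see the arithmetic, here is a minimal sketch in Python. The 90%, 98%, and $110 million figures come from the summary above; the implied total spending on the drug class is a back-of-the-envelope inference from those figures, not a number reported in the paper.

```python
# Back-of-the-envelope arithmetic behind the CER example above.
# The percentage and dollar figures come from the post's summary of the analysis;
# the implied class spending is an inference from them, not source data.

savings_share = 0.90       # share of class spending CER could eliminate
loss_share = 0.98          # offsetting loss (quality of life and downstream effects),
                           # expressed as a share of the same spending
net_share = savings_share - loss_share    # -0.08, i.e., an 8% net loss

net_loss_per_year = 110e6  # ~$110 million per year, per the post

# Working backward (assumption): if an 8% net loss equals ~$110M/year,
# annual spending on the class would be roughly $110M / 0.08, or about $1.4 billion.
implied_class_spending = net_loss_per_year / abs(net_share)

print(f"Net impact: {net_share:+.0%} of class spending")
print(f"Implied annual class spending: ${implied_class_spending / 1e9:.1f} billion")
```

Run with Python 3, this simply rearranges the post’s numbers: a net impact of −8% and implied annual class spending of about $1.4 billion.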

Medicine seems to be repeating an age-old duality, just under different rubrics — is it art or science?

It’s a duality physicians play out every day. Why does it resonate with me? What have I seen with my own eyes and mental labor? What truth about this do I possess?

I saw this duality acted out firsthand a couple of years ago as two physicians looked at an anomalous test result from a routine exam — of me. If the result had been placed in the framework of an external guideline, I would have ended up in surgery with a life-altering outcome. But the resident physician, who was grounded enough to take me, the patient sitting right in front of him, into account (young, healthy, asymptomatic, active), questioned the interpretation embraced by his guideline-loving attending. Their disagreement escalated, and they left the room to resolve the “discussion.” The attending prevailed and recommended surgery. In addition to doing a ton of research on the topic and forming my own opinion, I asked for a second opinion from a senior clinician. The second-opinion physician caught a major error in the interpretation of the test, and I was off the hook.

Why did this even happen? Because one physician refused to believe what he was seeing with his own eyes. He only fed data into a system and accepted the path it set for him.

He’d become computerized.

Someone once cautioned me against taking too seriously a story I found compelling, offering as a rebuke, “An anecdote isn’t data.” True, but perhaps anecdotes are just as important at the level of knowledge and truth.

Perhaps data are only useful under certain conditions, like those we encounter between truths.

How will we ever know?

Why don’t you check it on Google? Or do you know it to be true?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on "Is Google Making Us Stupid, Part II: Perhaps, If We Don’t Seek Truth Over Information"

This is similar to the bugbear that arises when people say (along the lines of) “science is facts”. It’s not. Science is a process; it’s how we get those facts.

Similarly, I think, with knowledge. Google has the data, but only intelligent, reasoning beings can use them in a synthesis.

Great post Kent, and I agree with rpg above.

On a related note, much faith is being placed in the potential of data mining and semantic searches for generation of new knowledge. But as anyone who has examined such machine-generated results knows, they throw up some startling false positives and negatives – which only an ‘uncomputerized’ mind can discriminate.

Of course, these technologies will improve, but we should not forget that, as you say, “only intelligent, reasoning beings” will be able to critically assess the data generated and apply them (or not) effectively.

I’d always heard the phrase as: The plural of “anecdote” is not “data”.

But I wanted to pass along this quote from Brian Eno on the subject:

I notice that the idea of ‘expert’ has changed. An expert used to be ‘somebody with access to special information’. Now, since so much information is equally available to everyone, the idea of ‘expert’ becomes ‘somebody with a better way of interpreting’. Judgement has replaced access.

Wonderful post, Kent. It resonates with Joe Esposito’s recent post in praise of reckless enthusiasm — which requires that you not think you already have all the answers (or can get them). In my reply to his post, I pointed out the importance of two things: attention and humility. Yes, pay attention to all the data you can; but don’t think that this “automates” the answer. In light of your post, I’d point out that the result of attention and humility should be JUDGMENT. Can we make better judgments because of what Google and the Internet have given us? Sure! But let’s remember the “messadata” that’s out there. Google isn’t God. The now famous Language Log post “Google Books: A Metadata Train Wreck” is amusing and eye-opening on that subject. (One should be sure to read the thoughtful comments on that post, including those from Google itself, which clarify how pervasive the messadata is.) — Bill Kasdorf

An interesting piece with a similar theme, dealing with the enormous amount of data piling up in the life sciences, can be found here.

Thanks, David.

Data require humans to provide context before they become information. Knowledge is the ability to take an action. It is created when information is transformed by human social interaction.

Wisdom is the ability to make the RIGHT decision. As the anecdote relates, the doctors had information and created knowledge, but wisdom required more than just their input.

The mistake that is often made is to take data directly to knowledge in order to make a decision, skipping the need for human intervention. In fact, many people incorrectly view this as a feature, not a bug. This is not really a route to wisdom.

Google helps us find the data faster but does not negate the need for substantial human intervention to arrive at useful knowledge.

Luckily, the advent of Google also means that we can add our own knowledge to the cycle in order to reach a wise decision. Google can help the patient weigh the doctors’ knowledge and find a wiser path to a useful decision.

But those who think we are now at the point where we can shorten this cycle will be greatly surprised.
