Over the past few years, we’ve been witness to a parade of partisans in the debate over whether the Internet is making us smarter and more capable or turning us into shallow and superficial information parasites.

Nicholas Carr carries the most water for the latter view, but others have joined in. Usually, their arguments that we’re going too far, becoming too fragmented, or becoming distracted are positioned to seem as if they have our best interests at heart — concern for our minds, our families, our communities, our culture.

Adam Gopnik, writing recently in the New Yorker, breaks down the more typical partisans in the following manner:

. . . the Never-Betters, the Better-Nevers, and the Ever-Wasers. The Never-Betters believe that we’re on the brink of a new utopia, where information will be free and democratic. . . . The Better-Nevers think that we would have been better off if the whole thing had never happened, that . . . books and magazines create private space for minds in ways that twenty-second bursts of information don’t. The Ever-Wasers insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others.

A recent post by Jeff Jarvis puts what he calls “the distraction trope” into perspective. Instead of worrying about whether our brains, families, or communities are changing, Jarvis strips away that sophistry and lays bare something more primal that seems to be at stake:

And isn’t really their fear . . . that they are being replaced? Control in culture is shifting. We triumphalists — I don’t think I am one but, what the hell, I’ll don the uniform — argue that these tools unlock some potential in us, help us do what we want to do and better. The catastrophists are saying that we can be easily led astray to do stupid things and become stupid. One is an argument of enablement. One is an argument of enslavement. Which reveals more respect for humanity? That is the real dividing line. I start with faith in my fellow man and woman. The catastrophists start with little or none.

Throughout history, this fear of losing control has consistently been masked as concern for higher, even altruistic, interests. Jarvis quotes Erasmus (via Elizabeth Eisenstein’s new book, “Divine Art, Infernal Machine”), who said during the proliferation of books:

To what corner of the world do they not fly, these swarms of new books? . . . the very multitude of them is hurting scholarship, because it creates a glut, and even in good things satiety is most harmful. [The minds of men,] flighty and curious of anything new [are lured] away from the study of old authors.

Erasmus was worried about losing control over a world he’d mastered through his knowledge of old authors and stable cultural touchstones, and Carr is worried about losing control over a way of studying and thinking and processing information he’s become adept with. These are not the political leaders of the Middle East who are concerned about destabilization at an entirely different level (but for some of the same basic reasons, and from some of the same fundamental causes). Control has a softer side than anything we’d associate with authoritarianism.

Control flows from competence and tradition. Change threatens both of these.

Competence is reassuring, but when the boundaries change and your competence is no longer relevant, there are two choices — change, or fight to keep the status quo.

Tradition allows status symbols and status signals to exist. Mention that you went to a Bach symphony in a high-tradition culture, and it’s meaningful. Mention it in a high-change culture, and you’re potentially tagged as a stuffed shirt.

The battle for the information landscape has also been about losing or gaining control, about change versus the status quo.

It’s vital to note here that the Internet is different from any other communications medium precisely because it’s the first that nobody controls, thanks to packet switching, TCP/IP, and other architectural marvels at its center. Its tradition is change. Competence with it demands change.

The people who first envisioned a network of interlinked computers — an internetwork — also imagined that this marriage of computing capabilities and human capabilities would change human thinking abilities and habits because the approach decentralized knowledge and created a new path forward. As J.C.R. Licklider wrote in 1960:

It seems reasonable to envision . . . a “thinking center” that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval. . . . The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.

But Licklider knew this would also change how we think:

The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.

So the visionaries of a new technology hoped it would not only change how we think but improve it. Licklider followed this up in 1968 with a sentence that anticipates why you’re reading this:

In a few years, men will be able to communicate more effectively through a machine than face to face.

The stories of Licklider, Douglas Engelbart, Paul Baran, and others are recounted wonderfully in the book, “The Master Switch: The Rise and Fall of Information Empires,” by Tim Wu. It’s a book I can’t recommend too highly (and I want to thank Bob Kelly for suggesting it to me). It outlines how, in film, phone, and television, the power of “the Cycle” of creative destruction and centralization/decentralization is assisted or retarded by government action or inaction, by immovable personalities, by corporate power, and by the pursuit of money.

The clear message from Licklider and others in the 1950s and 1960s was that the centralized knowledge system created by the powers behind AT&T’s telephone monopoly, the studio system in Hollywood, the major radio networks (NBC and CBS), and by radio extending its influence over early television (with broadcast television later trying to stifle cable television) all created an illusion of unity in the United States while stifling innovation. The authoritarian mode of thinking — whether driven by a U.S. government that was, in retrospect, surprisingly oppressive; corporate monopolies that sought to perpetuate themselves; or by social scolds who enjoyed the perch atop a centralized cultural replication machine — needed to change.

The anachronists who wish to return to things like “deep contemplation,” “classic texts,” and the like seem to hearken back to a centralized information economy, one which had some virtues, but virtues that did not justify the costs imposed on innovation, personal liberty, self-definition, diversity, or choice.

It’s not that one is all good and one is all bad. There is a trade-off, an elusive balance, a mix of benefits and drawbacks. In writing that seems prescient about both the promise and the perils of humanity’s continuing exploration of its boundaries, Sigmund Freud once wrote:

Man has, as it were, become a kind of prosthetic god. When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times.

We may argue again and again whether the Internet is changing our brains, elevating us, lowering us, making us smarter, or making us stupid. But at the end of the day, it seems the real argument is about control — who has it, who shares it, and who wants it.

So, despite all the partisans, sophistry, and essays about our brains, our culture, our souls, it’s important to remember that what we’re really arguing about is control.

Sometimes, a cigar is just a cigar.

Kent Anderson


Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


24 Thoughts on "The Battle for Control — What People Who Worry About the Internet Are Really Worried About"

Great analysis. It is no accident that the place where change is most apparent is in the political arena, the very realm of control. Here decentral channels like the blogosphere, twitter, etc., are wreaking relative havoc with traditional channels and structures.

Very interesting (and cynical) post and I’m looking forward to reading The Master Switch. I agree with Jarvis; “I start with faith in my fellow man and woman.” However, that faith leads me to a different conclusion than the one you begin with, namely, that concern about the internet’s effects is based entirely in the desire to control. I don’t deny that control is a motivation for some of the players, but I think it’s too simple a generalization to say that everyone who is thinking, writing, speaking about the impact of the internet on our culture is motivated by control. As soon as you label it as such you can easily dismiss it, and that’s too bad. We need to listen to all sides of the dialogue so we can make informed choices as scholars, citizens, consumers, parents, writers, educators, etc. I think we have an incredible opportunity to exploit a new information age that will tap into an amazing wealth of previously untapped and inaccessible knowledge. I also think we need to be thoughtful about what we’re willing to compromise in the process.

There is a balance here, but much of the hand-waving about “concern” isn’t aimed at making the right choices and decisions but is instead about panic and worry and fear. Are our brains changing? Are we making ourselves stupid?

When you notice how these concerns have been mirrored at past junctures of information proliferation, you begin to see a pattern. It’s not about making the best choices, but about control and comfort.

Are our brains changing? Are we making ourselves stupid? Why are these questions that reflect panic, worry and fear? I see these as legitimate lines of inquiry. If we can ask these kinds of questions about things like alcohol or substance use, or the way in which we educate children, or the effects of greenhouse gases, why can’t we ask these questions about technology that is clearly changing our culture to an unprecedented level? I worry about a world in which we don’t ask these kinds of questions, but simply accept without challenging.

Really? You don’t see this as fear-mongering at a low level? We can ask questions without instigating fear. How about: Do students who play historically accurate video games score better on tests than their text-based peers? That question has been answered, and there is no fear in the question, which is much different than inferring insidious forces in our brains or a deprecation of our intelligence.

Well, that’s a very different and specific question. And certainly not a question I would oppose or denigrate as invalid. The questions I’m interested in are in fact about the potential evolutionary impact of technology on human thought and the advancement of knowledge. So if your critique is related to the semantics, then I think the first example is still good, i.e., are our brains changing? In my view this is clearly not fear-mongering but the basis of valid scientific inquiry. As for the second question, the revised version would be something like “Are information ubiquity and technology affecting our ability to synthesize information and think independently to create new knowledge?”

“Tradition allows status symbols and status signals to exist. Mention that you went to a Bach symphony in a high-tradition culture, and it’s meaningful.”

Telling someone in a high-tradition culture that you went to Bach symphony is meaningful only in the sense that it shows that you’re lying — Bach didn’t write any symphonies.

As for your argument as a whole, I agree that the control issue is one reason why some people — particularly those in power — “worry about the Internet.” But for other folks there are a host of other concerns that you don’t touch on. The same technology that allows information to be distributed more widely, quickly, and easily than ever also allows misinformation to be distributed widely, quickly, and easily — there’s a reason Snopes exists! And then there are safety concerns. A member of my family who takes great care to keep personal data secure was nonetheless a victim of identity theft recently, and last year a pedophile in my area used a social networking site to try to lure a 10-year-old for a meeting. Of course, these and other problems I can mention also exist in the “real” world, but the Internet facilitates them, or at least adds a disturbing new wrinkle.

I’m not saying that these issues outweigh the remarkable benefits that the Internet has brought us — but it’s these issues, rather than control, that “worry” the people I know, particularly older ones.

You are right but the battle for control won’t be in any of the places you or anybody in this article have named.

What many are concerned about, especially those in positions of authority, is “the deconstruction of the illusion” being brought about by this sudden, organic explosion of information.

We humans are learning to trust that liars will lie, cheaters will cheat, thieves will steal, and killers will kill…. They, the established authorities, are afraid that they are going to have to trust us too….

It’s frightening and it causes them to lose sleep at night…

We are the ones who teach your children, clean your houses, cook and serve your food, drive your cabs, build your houses, protect you at night and put out your fires.

Best regards,


Be careful about embracing the internet as the great equalizer. While it has done a lot to *broaden* our abilities to reach out, there are still a lot of central controls.

Wikipedia is the de facto authority online, arguably because “anyone can edit it,” but this is a lie. Go add a page about yourself and see.

How will people find you? Through Google? Single point of failure, single point of control.

DNS is also a single point of control – we’ve seen that with the Department of Justice removing DNS entries for “suspected terrorist sites” or something.

So if we want the internet to be truly an open sea of communications, we’ll have to work to keep it that way. Price of liberty, etc.

Among the “Never-Betters, the Better-Nevers, and the Ever-Wasers” I collectively yawn as one of the “What-Evers”. The system and range of human cognition and interaction is too complex to assume that everyone’s experience is that of Carr or anyone else.

I’d also add a dose of Churchill – “I am an optimist. It does not seem too much use being anything else.”

Great Churchill quote. I think I’ll use that one a lot going forward. Thanks.

So you’re saying Nicholas Carr is too stupid to adapt to the future, or, to use your words, has a type of competence “which is no longer relevant”?
Is it not possible that Carr, Zittrain, Jackson, and similar thinkers are fully competent users of technology, who, as thoughtful professionals and aware citizens, raise questions (and yes, voice objections) when there is reason to do so?
No, they must be control-fetishists who have some fundamental agreements with the topical bogeymen of the present, or, as you put it, criticize the Internet “for some of the same basic reasons, and from some of the same fundamental causes.” (which was a cute slander–cheap, but cute).

I suppose it’s a good thing that only liberty-loving, innovative types have access to the Internet. It could never, ever be used to stifle liberty or intimidate free thought (Patriot Act? AT&T?).

The free-market paradise of independent capitalists on equal footing on the Internet has also come to pass. If Amazon isn’t a monopoly, I don’t know what is…or Google…or iTunes.

No, I’m saying Carr and others are trying to scare us about the future and changes that the inventors of it intended, and they are doing so because they want to remain closer to the status quo in which they’ve thrived, which results in wanting to exert some control in order to reduce the risk that their skills are marginalized in an uncertain future.

Their specific objections may have merit, but they have to be balanced against the benefits others reap, and the potential that’s being unlocked. And because the Internet is not a controlled monopolizable technology, partisans have to use rhetoric, logic, commerce, or other forces to control it. As for Amazon, Google, or iTunes, they are using commerce that emanates from the Internet, but they are not changing the underlying technology (TCP/IP, etc.).

If you reduce the concerns down to nothing but control, how is the internet not under control? Maybe those savvy enough to understand it and mold it, themselves a new elite who could exact the same authoritarian control as AT&T, feel free in it. But they are a privileged few, who are just as concerned with their position of control as they spread the gospel of the internet. Heavy internet users are disconnected from the rest of the world, most of which is not connected, not literate in these technologies, and systematically excluded. Information on the internet is all encoded in a language unreadable to nearly all humans unless they have an expensive electronic machine to read it. How is that information liberation? I have faith in the goodness of man, and his or her ability to navigate and control their destiny in this time of information tumult. However, I don’t have any more affection or trust in the elite of the new information systems than I did in the old. If Carr is trying to maintain a status quo that he thrived in, then you’re promoting another that you’ll thrive in. And I think the concerns cited in that New Yorker article, changing use patterns and information demands, are very legitimate, and ignoring them is a mistake. If the internet’s journalism standards are those being pushed by content farms, no articles over 800 words, journalism is doomed.

OK, let’s try to take these one at a time:

The Internet is not under control because it is a technology no company owns. It is very different in that regard. Telephone companies owned telephone infrastructure. Radio companies owned radio infrastructure. Television companies owned their vertical. Nobody owns the Internet, so it’s open to innovation based on fewer constraints than a company vertical would impose. Far fewer.

The privileged few? Most people own cell phones that can communicate (via SMS or Web) with the Internet, and this installed base is huge and growing. You might as well complain about writing and the “privileged few” who can enjoy literacy, and those who spread the gospel of reading. Are readers “disconnected from the rest of the world” or more aware of it? Yes, there are disparities we all wish could vanish, but the causes of those are multifactorial, and the truth is that the Internet (and communication technology improvements in general) have been responsible for many improvements in educational attainment, awareness, literacy, and empowerment.

I’m not promoting a status quo that I’ll thrive in, but pointing out that retaining a status quo is what Carr and others are asking us to do, possibly for their own comfort. In fact, I state explicitly in the post that change is inherent on the Internet, so if you’re competent with it, you’re competent with change. Are you arguing that change is a form of status quo?

The New Yorker article was, to me, essentially descriptive and rambling, with few insights. There were no clear concerns listed, as I recall.

The Internet produces a lot of dreck, but also a lot of great journalism. In fact, things happen on the Internet often directly and first. It’s the mass media that today gets things terribly wrong, is distorted, is an echo chamber, is polarizing, and is fact-challenged.

Some of your other points I can’t respond to because I’m afraid I don’t quite understand them.

Lots of things are really about control, and often people aren’t too happy to see it because it’s uncomfortable. Obviously so when you’re at the right end of the gun, but also when you’re at the wrong end. You’d rather just ignore it, life is easier then (for the time being).

But I think you don’t make enough of the important distinction between those who benefit from control and those who don’t.

When criticisms come from the first group, all your points are on target. When they come from the victims, so to speak, the objections might still be bogus because they’re trying to stay in denial. But they might also not be. In that case, it’s vital to listen. The ones who bleed out on the bleeding edge can tell us something about what’s in store for us if we continue on the same path.

Another aspect I think you may skip over a bit lightly is the loss of skills. That isn’t the first time it’s happened. In pre-literate societies, a few people with the aptitude memorize the equivalent of whole books. After the invention of writing, very few people bother. Is that good or bad? It doesn’t matter because we don’t care, but part of me wonders what we’d be capable of if we still went to the trouble of training that kind of mental capability.

The problem with losing something is after a while you don’t know what you’ve lost. We may be going through that process right now. Or we may not. But it seems foolish not to stop and examine it. There might be a baby in all that bathwater.

Loss of skills often means changing skills. Those new skills are more useful than the old ones. How helpful is it to science, the arts, and human progress if we’re constantly retelling the same stories to maintain a reservoir of cultural knowledge? Instead, let’s write them down, make lots of copies, and turn our attention to other things, which we can also write down, share through lots of copies, and start a virtuous cycle of the same, so that soon we have millions of stories we can share and experience and enjoy rather than a paltry number we have to mumble repeatedly to hold onto and pass down. In the meantime, we gain skills of inquiry, organization, craft, and categorization we never would have acquired in the old paradigm.

Examining things is appropriate. Writing fear-tinged diatribes to scare people away from new and useful technologies is a time-tested method of trying to exert cultural control while not addressing the possibilities.


A very insightful post. Let me give you a friendly “push.”

I think you may be writing about “control” in far too explicit a way. There’s no doubt that we need to think about visible government/corporate interventions (kill switches or Lessig’s concerns about Copyright/IP). Though keeping information about interventions public will always be a struggle (thnx EFF and others), as long as it stays public it can be dealt with.

What concerns me more are the im-mediated (versus the immediate) forms of control. In other words, what aspects are hidden, or what have we been trained not to see? Two examples in particular: (1) Encoded “Ideologies” and (2) Unknowable Algorithms.

Quick disclaimer: I am not worried about vast conspiracies/cabals to steal privacy, etc. Nor is this an argument that we’re somehow becoming less human/disconnected (you’ve tackled that one above … though one place where part of the disconnected argument needs to be seriously considered is in remote/drone warfare, but that’s too far afield for now)

By encoded ideologies, I mean the ways in which ideas and values are often unconsciously encoded into the programs/UIs themselves. All too often, we are not conscious of how the machine is working us (as we’re working the machine). This can be as simple as arguing that a binary/digital logic (off/on) infuses much of our computing experience (for example, that for most of its existence the only two states of relation on Facebook were friend/not-friend).

This shouldn’t be read as an alarmist call (Marcel Mauss, among others, points out that “culture” has been shaping our bodies for millennia), and past tools have done this as well (following Beatrice Warde, the value of “transparent” information has been with us since Gutenberg).

What is a bit disconcerting about specific ideas of “transparency” (in both UI and interaction) is, first, that transparency is assumed to be a universally good thing, and second, that it ends up having profound effects on the actions we can take.

As it becomes increasingly easy to share things, and as systems become more networked, we really need to think about how to critically evaluate what is being lost and gained (you’ve in part addressed this above). But part of the problem is that typically we don’t clearly see this as it’s happening.

Wrestling with integration and change does happen over time (something you teased out above – and Joshua Meyrowitz’s “No Sense of Place” is a seminal read on that subject) – but the more we can do to become aware of these shifts so that (as cultures/subcultures) we can work through them, the better. Plus there are still a lot of things we have utter blind spots to.

This leads to the second point of control that concerns me: (2) Algorithmic Authority. This one really scares me because it is invisible and can have a profound effect on things. We have a tendency to trust machines to be smarter than they are – especially if they can talk (see Eliza Effect). And when you start to increasingly rely on machines to parse information, what becomes incredibly valuable (and must therefore be protected) is not so much the hardware or the software “wrapper” but the Algorithm that’s making the choices (see Google as the classic example).

By itself, this doesn’t scare me – I’m a big fan of a good Algorithm (again, see Google).

What concerns me is that Algorithms can make things look smart and that it’s always in the Algorithm writer’s best interest to keep their special sauce secret (again, see Google). A search engine is one thing. But if we think about a Medical Diagnostic Algorithm, that’s where things get a bit more concerning. Especially because (as with Anti-Terrorism software, for example) there’s definitely a school of thought out there that we could use that type of technology to do initial diagnostics in Doctor’s offices (this, btw, goes back at least as far as Eliza).

It took us a while to really get the problem with “friend”/“not-friend” (and how that flattened really complex social relations); when we try to understand ideologies that are (unintentionally) encoded in black-box Algorithms, things get scary (see horror stories about being misfiled on the “No Fly List”).

I’m not seeking to turn back the clock or stop progress. But I do think we need to get a lot smarter about our cultural blindspots and invisible forms of control (intentional or not).

So, again, I think you’re spot on about this at a high level being about control… But that understanding requires us to really understand all the different types of control that are in play.

Great analysis, kudos! Quotes from Erasmus and Licklider particularly seem to provide a wide and historical perspective in this article.

I translated this article into Korean and posted it on my blog:

Hope you don’t mind but I am ready to take it down if you want me to.
