I started Jaron Lanier’s recent book, “You Are Not a Gadget,” skeptical of his initial opinions. His complaints about standardization and today’s limiting philosophy of information technology didn’t impress me at first — after all, we had to embrace standards around electricity, electromagnetic spectrum deployment, and the like to have things like electric lights, radio, and television. Sure, these choices were limiting, but they ultimately liberated larger phenomena.
But as the book proceeded, either my skepticism eroded or Lanier’s arguments grew sharper, or both. In any event, I found myself thinking hard about the track record of the Web as a commercial and creative catalyst thus far, about the perceived vs. real virtues of “open,” about the ultimate cynicism of Facebook, and about my own participation in social media and electronic commerce, especially the centrality advertising is gaining in the culture developing around online identity.
How we’ve ceded some of our brains to the digital “hive culture” is a main fixation of Lanier’s writing, especially when we allow that digital things (computers, algorithms, cool mashups, collaborative works by anonymous contributors) might be “smarter” than we are as individuals:
People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.
That is, it’s unclear whether the intelligence resides in the machine, or is subtracted from the participant’s own stores in order to preserve the appearance, and the growing belief, that the computer/crowd/hive is smart.
Lanier’s most damning points revolve around what he calls “open culture” — the movement spurred by advocates of open access, open standards, open data, open, open, open. While it all sounds good, what it’s actually created is an amoral world in which consequences aren’t considered, the victims are blamed, technical solutions are thought to be better than common sense, creativity has been stifled, commerce is abandoned, and gee-whiz wonderment conceals deeply cynical plays by scheming companies. The Age of Systems isn’t an age Lanier wants to be a part of, and he makes a pretty convincing case as to why.
As I was writing this, a Web site representing these problems came to my attention. Called PleaseRobMe.com, the site uses updates from various geo-targeting programs and status tools to generate a list for would-be robbers of “recent empty homes” and “new opportunities.” Of course, the creators hide behind the hacker ethos of “we’re just trying to show you what could happen,” but what they’re also demonstrating is how dismissive technologists are of the common sense of “security through obscurity,” implicitly blaming the victims of social media services while contributing nothing creative to our culture.
The “security through obscurity” trope bears analysis. Anyone who has had this approach to security dismissed with a knowing smirk by technologists has experienced the dismissal of common sense in favor of technology. Lanier asks, quite rightly: Is there any kind of security other than obscurity? Once something is no longer obscure, no longer hidden, it’s not secure. So, in an “open” world that demotes “obscurity” to the realm of silliness, we have private information put together by amoral technologists, and we experience situations in which our lives are at the mercy of someone else’s inadequate or highly skewed moral calculus.
Lanier has an even more horrifying example of this — two teams of researchers in 2008 revealed how they had spent two years figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person.
Let that sink in — two teams (University of Massachusetts and University of Washington) working two years to figure out how to use technology to kill a person.
Would anyone other than a digital worker have been allowed to pursue, much less present at conferences, two years’ worth of research on how to kill a person in a supremely twisted way? And what did publishing the information accomplish? Openness, certainly, but obscurity might have been more appropriate — and more secure. Ultimately, “security through obscurity” is now the only bulwark against assassins with cell phones, because pacemaker IDs are hidden inside hospital systems and medical records. But if openness around the manufacturers’ identification numbers for pacemakers were to follow (as part of a healthcare IT initiative, perhaps), patients would need to be warned, and perhaps hundreds of surgeries performed to replace vulnerable devices. With morbidity and mortality in the balance, these digital researchers were doing what Lanier worries many of their ilk are best at now: creating fear, uncertainty, and doubt, contributing nothing to our culture, and remaining enamored of their own cleverness and moral rectitude.
Lanier also paints a commercial world that has divided the haves and have-nots more severely, emptying our culture of what he calls the “intellectual middle-class” — the journalists, musicians, authors, and composers whose careers are made as stringers, session players, genre authors, and journeymen songwriters. These truly creative people have been sidelined by algorithms (Last.fm, Pandora) and by digital tools framed as “open” but really removing commercial reality from the equation.
We’ve talked here before about how the world of scarcity has been replaced by the world of abundance, and how this abundance is allowing Web 2.0 and other digital phenomena to occur. Lanier argues that we need to restore artificial scarcity, since scarcity is what makes economies work. Only by doing so can information providers and others retain economic models that will restore creativity and make it sustainable. This means economic models based on ownership, property, access rights, and all the familiar things from a scarcity economy. Only then can the creative and indispensable intellectual middle-class again thrive.
The lack of creativity in Web 2.0 culture is something Lanier really takes on with a vengeance, and he has many good points, including:
Let’s suppose that, back in the 1980s, I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic.
So we have Wikipedia and Linux from open culture, which have done little but drain the commercial viability from traditional encyclopedias and UNIX — emptying the world of a middle-class of programmers, writers, reference editors, and publishers.
Yet, Lanier points out, while open culture has done little more than recreate existing norms in a commercially vacuous manner, supposedly “closed” cultures have given us real innovation — the iPhone, DVRs, new medicines, hybrid automobiles, and the list goes on.
It’s very interesting to note how little originality open culture has generated for scholarly publishing. Instead of making ground-breaking information systems (those have emerged from closed systems and traditional economic models, by the way), the “open” initiatives have been technology-based recreations of existing forms, with little creativity, just echoes (or disjointed assemblages of echoes). While the blogosphere has been criticized as an echo chamber of voices, it might be that Web 2.0 overall is an echo chamber of ideas on a much larger scale, using technology to recreate familiar things while draining them of commercial viability and human creativity — truly empty echoes.
Creative Commons falls under Lanier’s microscope, and it doesn’t fare well. He views the labyrinth of agreements merely as an elaborate algorithm to keep artists and creative people remote from their audiences while embracing the culture of “open” and diminishing the commercial value of creative outputs. Given the debate over copyright at the recent PSP conference, it’s interesting to note that copyright law isn’t an algorithm, which is why it’s so “confusing” — sometimes, real people actually have to sort out what it means and reinterpret it. Maybe creativity is a human endeavor, not an algorithmic output!
Lanier’s observations fall in line with arguments I’ve made here before — that distribution and device providers are the major economic players in our era — but with a twist, namely that maybe those are the only things that have been delivered. Because we’ve been beguiled by “digital,” we’ve forgotten that humans, creativity, work, and good things should matter more than slick new distribution or presentation systems. Yet, these human endeavors are precisely the things not being rewarded in the flattened, open world.
Lanier makes a seriously chilling point when talking about how commercial value has been extracted from journalism and replaced by a distributed, technologically driven publishing approach:
Would the recent years of American history have been any different, any less disastrous, if the economic model of the newspaper had not been under assault? We had more bloggers, sure, but also fewer Woodwards and Bernsteins during a period in which ruinous economic and military decisions were made. . . . Instead of facing up to a tough press, the administration was made vaguely aware of mobs of noisily opposed bloggers nullifying one another. . . . The effect of the blogosphere overall was a wash, as is always the case for the type of flat open systems celebrated these days.
Facebook is viewed as a cynical play to drive advertising at people through the “social graph” — leading to another strange and worrisome social side-effect:
Ironically, advertising is now singled out as the only form of expression meriting genuine commercial protection in the new world to come. . . . [Ads] are to be made ever more contextual, and the content of the ad is absolutely sacrosanct. . . . The centrality of advertising to the new digital hive economy is absurd, and it is even more absurd that this isn’t more generally recognized. . . . If the crowd is so wise, it should be directing each person optimally in choices related to home finance, the whitening of yellow teeth, and the search for a lover. All that paid persuasion ought to be mooted.
Finally, Lanier talks about the extended neoteny our rich culture is allowing, and how this extended childhood is exerting an oddly conservative force on our culture — teens today don’t ever lose touch with friends, so don’t reinvent themselves as dramatically as people who could break cleanly from their pasts multiple times over their lifetimes; people live longer and remain productive longer, preserving their effects on our culture for multiple generations instead of just one; and Web 2.0 technologies and philosophies tend to capture and amplify the dark sides of childhood.
While this long post might seem to give away a lot of Lanier’s book, that is by no means the case (in fact, it would be antithetical to give his arguments all away for free). My copy of “You Are Not a Gadget” ended up significantly thicker, its dog-eared pages folded over memorable passages. This post doesn’t even begin to do the book justice. It’s worth reading in its entirety. It’s worth owning. It’s a keeper.