Jaron Lanier (image by jdlasica via Flickr)

I started Jaron Lanier’s recent book, “You Are Not a Gadget,” skeptical of his initial opinions. His complaints about standardization and today’s limiting philosophy of information technology didn’t impress me at first — after all, we had to embrace standards around electricity, electromagnetic spectrum deployment, and the like to have things like electric lights, radio, and television. Sure, these choices were limiting, but they ultimately liberated larger phenomena.

But as the book proceeded, either my skepticism eroded or Lanier’s arguments grew sharper, or both. In any event, I found myself thinking hard about the track record of the Web as a commercial and creative catalyst thus far, about the perceived vs. real virtues of “open,” about the ultimate cynicism of Facebook, and about my own participation in social media and electronic commerce, especially the centrality advertising is gaining in the culture developing around online identity.

How we’ve ceded some of our brains to the digital “hive culture” is a main fixation of Lanier’s writing, especially when we allow that digital things (computers, algorithms, cool mashups, collaborative works by anonymous contributors) might be “smarter” than we are as individuals:

People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.

That is, it’s unclear whether the intelligence resides in the machine, or is subtracted from the participants’ own stores in order to preserve the appearance, and the growing belief, that the computer/crowd/hive is smart.

Lanier’s most damning points revolve around what he calls “open culture” — the movement spurred by advocates of open access, open standards, open data, open, open, open. While it all sounds good, what it’s actually created is an amoral world in which consequences aren’t considered, the victims are blamed, technical solutions are thought to be better than common sense, creativity has been stifled, commerce is abandoned, and gee-whiz wonderment conceals deeply cynical plays by scheming companies. The Age of Systems isn’t an age Lanier wants to be a part of, and he makes a pretty convincing case as to why.

As I was writing this, a Web site representing these problems came to my attention. Called PleaseRobMe.com, the site uses updates from various geo-targeting programs and status tools to generate a list, for would-be robbers, of “recent empty homes” and “new opportunities.” Of course, the creators hide behind the hacker ethos of “we’re just trying to show you what could happen,” but what they’re really demonstrating is how dismissive technologists are of the common sense of “security through obscurity,” how readily they blame the victims of social media services, and how little of creative value they contribute to our culture.
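Part of what makes the PleaseRobMe.com example so damning is that no sophistication is required. As a purely illustrative sketch (the `updates` list, its field names, and the phrase list are invented here for illustration; this is not the site's actual code, which simply repackaged public Twitter searches), flagging public announcements of absence amounts to a few lines of string matching:

```python
# Hypothetical sketch only: `updates` stands in for any public stream of
# geo-tagged status posts. No real API is called; the data is hard-coded.

AWAY_PHRASES = ("at the airport", "on vacation", "checked in at", "away for the weekend")

def flag_empty_homes(updates):
    """Return the updates that publicly announce the poster is away from home."""
    flagged = []
    for update in updates:
        text = update["text"].lower()
        # Naive substring matching is enough to reproduce the effect
        if any(phrase in text for phrase in AWAY_PHRASES):
            flagged.append({"user": update["user"], "evidence": update["text"]})
    return flagged

sample = [
    {"user": "alice", "text": "Checked in at O'Hare, off to Denver for a week!"},
    {"user": "bob", "text": "Quiet night at home with a book."},
]
print(flag_empty_homes(sample))
```

The point of the sketch is exactly Lanier's: the "technology" here is trivial; the only real contribution is the framing of other people's disclosures as burglary intelligence.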

The “security through obscurity” trope bears analysis. Anyone who has had this approach to security dismissed with a knowing smirk by technologists has experienced firsthand the dismissal of common sense in favor of technology. Lanier asks quite rightly, Is there any kind of security other than obscurity? Once something is not obscure anymore, not hidden, it’s not secure. So, in an “open” world that demotes “obscurity” to the realm of silliness, we have private information put together by amoral technologists, and we experience situations in which our lives are at the mercy of someone else’s inadequate or highly skewed moral calculus.

Lanier has an even more horrifying example of this — two teams of researchers in 2008 revealed how they had spent two years figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person.

Let that sink in — two teams (University of Massachusetts and University of Washington) working two years to figure out how to use technology to kill a person.

Would anyone other than a digital worker have been allowed to pursue, much less present at conferences, two years’ worth of research on how to kill a person in a supremely twisted way? And what did publishing the information accomplish? Openness, certainly, but obscurity might have been more appropriate — and more secure. Ultimately, “security through obscurity” is now the only bulwark against assassins with cell phones, because pacemaker IDs are hidden inside hospital systems and medical records. But if openness around the manufacturer’s identification numbers for pacemakers were to follow (as part of a healthcare IT initiative, perhaps), patients would need to be warned, perhaps hundreds of surgeries performed to replace vulnerable devices, etc. And with morbidity and mortality following, these digital researchers were doing what Lanier worries many of their ilk are best at now — creating fear, uncertainty, and doubt, while not helping our culture and remaining enamored of their own cleverness and moral rectitude.

Lanier also paints a commercial world that has divided the haves and have-nots more severely, emptying our culture of what he calls the “intellectual middle-class,” the journalists, musicians, authors, and composers whose careers are made as stringers, session players, genre authors, and journeymen songwriters. These truly creative people have been sidelined by algorithms (Last.fm, Pandora) and digital tools framed as open but really voiding commercial reality from the equation.

We’ve talked here before about how the world of scarcity has been replaced by the world of abundance, and how this abundance is allowing Web 2.0 and other digital phenomena to occur. Lanier argues that we need to restore artificial scarcity, since scarcity is what makes economies work. Only by doing so can information providers and others retain economic models that will restore creativity and make it sustainable. This means economic models based on ownership, property, access rights, and all the familiar things from a scarcity economy. Only then can the creative and indispensable intellectual middle-class again thrive.

The lack of creativity in Web 2.0 culture is something Lanier really takes on with a vengeance, and he has many good points, including:

Let’s suppose that, back in the 1980s, I had said, “In a quarter century, when the digital revolution has made great progress and computer chips are millions of times faster than they are now, humanity will finally win the prize of being able to write a new encyclopedia and a new version of UNIX!” It would have sounded utterly pathetic.

So we have Wikipedia and Linux from open culture, which have done little but drain the commercial viability from traditional encyclopedias and UNIX — emptying the world of a middle-class of programmers, writers, reference editors, and publishers.

Yet, Lanier points out, while open culture has done little more than recreate existing norms in a commercially vacuous manner, supposedly “closed” cultures have given us real innovation — the iPhone, DVRs, new medicines, hybrid automobiles, and the list goes on.

It’s very interesting to note how little originality open culture has generated for scholarly publishing. Instead of making ground-breaking information systems (those have emerged from closed systems and traditional economic models, by the way), the “open” initiatives have been technology-based recreations of existing forms, with little creativity, just echoes (or disjointed assemblages of echoes). While the blogosphere has been criticized as an echo chamber of voices, it might be that Web 2.0 overall is an echo chamber of ideas on a much larger scale, using technology to recreate familiar things while draining them of commercial viability and human creativity — truly empty echoes.

Creative Commons falls under Lanier’s microscope, and it doesn’t fare well. He views the labyrinth of agreements merely as an elaborate algorithm to keep artists and creative people remote from their audiences while embracing the culture of “open” and diminishing the commercial value of creative outputs. Given the debate over copyright at the recent PSP conference, it’s interesting to note that copyright law isn’t an algorithm, which is why it’s so “confusing” — sometimes, real people actually have to sort out what it means and reinterpret it. Maybe creativity is a human endeavor, not an algorithmic output!

Lanier’s observations fall in line with arguments I’ve made here before — that distribution and device providers are the major economic players in our era — but with a twist, namely that maybe those are the only things that have been delivered. Because we’ve been beguiled by “digital,” we’ve forgotten that humans, creativity, work, and good things should matter more than slick new distribution or presentation systems. Yet, these human endeavors are precisely the things not being rewarded in the flattened, open world.

Lanier makes a seriously chilling point when talking about how commercial value has been extracted from journalism and replaced by a distributed, technologically driven publishing approach:

Would the recent years of American history have been any different, any less disastrous, if the economic model of the newspaper had not been under assault? We had more bloggers, sure, but also fewer Woodwards and Bernsteins during a period in which ruinous economic and military decisions were made. . . . Instead of facing up to a tough press, the administration was made vaguely aware of mobs of noisily opposed bloggers nullifying one another. . . . The effect of the blogosphere overall was a wash, as is always the case for the type of flat open systems celebrated these days.

Facebook is viewed as a cynical play to drive advertising at people through the “social graph” — leading to another strange and worrisome social side-effect:

Ironically, advertising is now singled out as the only form of expression meriting genuine commercial protection in the new world to come. . . . [Ads] are to be made ever more contextual, and the content of the ad is absolutely sacrosanct. . . . The centrality of advertising to the new digital hive economy is absurd, and it is even more absurd that this isn’t more generally recognized. . . . If the crowd is so wise, it should be directing each person optimally in choices related to home finance, the whitening of yellow teeth, and the search for a lover. All that paid persuasion ought to be mooted.

Finally, Lanier talks about the extended neoteny our rich culture is allowing, and how this extended childhood is exerting an oddly conservative force on our culture — teens today don’t ever lose touch with friends, so don’t reinvent themselves as dramatically as people who could break cleanly from their pasts multiple times over their lifetimes; people live longer and remain productive longer, preserving their effects on our culture for multiple generations instead of just one; and Web 2.0 technologies and philosophies tend to capture and amplify the dark sides of childhood.

While this long post might seem to give a lot of Lanier’s book away, this is by no means the case (in fact, it would be antithetical to give his arguments all away for free). My copy of “You Are Not a Gadget” ended up noticeably thicker, its dog-eared pages marking memorable passages. This post doesn’t even begin to do the book justice. It’s worth reading in its entirety. It’s worth owning. It’s a keeper.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


Discussion

42 Thoughts on "“You Are Not a Gadget” — Why Open Culture and Technocentric Philosophies Are Ruining Our Lives"

whew. between the fact that people really can’t just be “the journalists, musicians, authors, and composers whose careers are made as stringers, session players, genre authors, and journeymen songwriters” anyway, because they can’t afford health insurance, or its alternative, health care — and this. well. ??

I’m a bit uneasy with the point of view that “Once something is not obscure anymore, not hidden, it’s not secure.” A very simple analogy would be Fort Knox. Everyone knows that it is there, and that it contains a great deal of valuable gold, but it is an extremely secure place. Online security, both server side and user side, can be secure as well, but this requires good planning at various different levels.

I do remember a conversation I had with a sys admin a while back. He told me that the most secure server is one that is powered off and completely unplugged. Obviously there is a need for balance, otherwise the technology becomes completely useless.

The lack of ingenuity in Web 2.0, I could not agree with more. Sounds like an interesting read. I think I’ll pick up a copy of the book.

However, those gold depositories you don’t know about are even more secure. You don’t have to resort to defensive measures with obscurity. Otherwise, it’s a game.

Kent, that is true, the obscure ones are more secure, at least until there is a leak and others find out about it. If you are not prepared for that leak, then you are really in trouble. So, to your point, you do still have to play the game, even if you are relying on obscurity.

But that’s only because we’re in a culture, especially digitally, in which a “leak” as you call it can be so damaging. Lanier wants us to think a little harder about what we’re doing, and not just drive off the digital cliff.

It’s a question of points of failure – if you’re relying on the fact that it’s secret to keep it safe, then a failure of that secrecy will be utterly devastating. Secrecy can be one part of a security plan, but it shouldn’t be the only part of it. Otherwise, when that fails – and keeping a secret is not an easy thing to do – then, game over.

I think this is too far a shift of meaning into “secret”. Lanier’s point is that some things are naturally obscure (in which cupboard I keep the peanut butter) while not being secret (corner cupboard to the right of the sink — not secret, but still obscure enough that I’m not worried about you stealing my two, count ’em, two jars of peanut butter). Technologists are making things non-obscure that people assume are obscure (e.g., where I am right now) and not thinking through the implications. Also, they assume things that are obscure aren’t secure, which is not logically true, especially for short periods of time. So they over-engineer things. In essence, they get it wrong both ways — revealing things they shouldn’t, securing things that don’t need it. Of course, users are also foolish for divulging their locations and all that jazz, but as you state, that’s a question of points of failure. If the technologists weren’t enabling all this as if it were all fine and dandy, people could behave as they always have.

And, sorry for the separate post, but to follow up – “security” doesn’t necessarily mean “technology.” In fact, relying only on technology for security is a pretty terrible idea as well. Good security is, fundamentally, about common sense – some of which may be technological, many of which should be about distribution of authority and responsibility, etc.

Kent, can you share more on what you meant by “the ultimate cynicism of Facebook”?

Cynicism in my book is exploiting the scruples of others by doing something heartlessly and pessimistically. That’s cynical. Facebook is exploiting the desire people have to connect in order to create advertising opportunities in the social graph. Its constant probing of the boundaries of privacy is evidence of this. I don’t think Zuckerberg really believes in the warm fuzzies of people connecting. I think he sees a business angle, and probably is a bit of a misanthrope (but I’m just reading between the lines of his public statements and general disposition). In any event, it’s pretty cynical.

While I think that Lanier makes a lot of sense in general, I can’t say I agree with the whole concept of “security through obscurity” being better than thorough scrutiny. I’m all for his concept of smaller closed working groups, but once those things are released, there is great power in being able to confirm that they are what they say they are, and in many eyes examining them for flaws. While it isn’t ideal for every situation, there are places where an “open” approach is superior to a closed one.

Science is based on this principle. If you publish a paper, you have to openly state how you did what you did and make reagents available so others can double-check and verify your work. This is even more vital for medical procedures and devices where the stakes of failure are higher. Yes, I think it’s good that graduate students spent time trying to find flaws in pacemakers. I’d rather they found the flaws and responsibly reported them to the manufacturer so they could be fixed before they became public than having them found by someone less scrupulous who might try to exploit them. They’ve done a service to patients by making the devices more secure. That’s been the big controversy with electronic voting machines. If we, the public, can’t examine the code they run, how do we know they haven’t been compromised?

Another example would be comparing the security record of programs like Windows or Microsoft’s server programs to things like Unix, Linux or Apache. Obscurity has certainly not provided MS with any security benefits.

As for Fort Knox, a better analogy would be comparing Fort Knox, which we all know about, but has been hardened through years of attempted robberies by Auric Goldfinger and is now well-secured with guards and alarms systems to a secret stash of gold hidden in an unknown location but left unguarded. The security is provided through obscurity but you’d better hope someone honest stumbles across it before a thief does.

The mention of PleaseRobMe here seems contradictory though–PleaseRobMe makes an incredibly valuable point about the dangers of geolocation services and suggest we not be so open, which is more in line with Lanier’s thinking. This article makes their point clear–the author sees a woman take a picture in a park on an iPhone. By location and time, he’s able to find the photo on Flickr, then see the rest of the woman’s photo collection. This makes it obvious where she lives, and in many of the photos, he can see the contents of her home. New services like those targeted by PleaseRobMe would let him know her exact location if there were anything worth stealing in her home. His wife points out that he regularly posts his upcoming trips and meetings he’s going to attend on his blog, letting the world know when she and their young child will be at home alone. You can call it “blaming the victim” but it’s important that people actually think about what they’re doing by being so casual and open with their personal information.

Two responses — Lanier’s point isn’t that security through obscurity is the only source of security, just that it’s ultimately superior. Proponents of the “open” thinking believe that obscurity is just wrong or primitive or something. And while scientific publishing is about releasing reports of results, that’s far from “open” in the sense Lanier’s talking about. That would probably be cameras in the labs, Twitter reports of data from machines, etc. That level of openness isn’t helpful. I’d argue that scientific reports are more “closed” than “open” in many senses. That’s why they’re so polished and constrained and focused. They aren’t laying it all out there. I don’t think a paper with sentences like, “Then Steve broke the gauge and Maggie decided to take the reading she saw just before the glass cracked, and that’s why the value of 17.437 was recorded. It kind of fit with the other data, so we didn’t think it was a big deal” would ever be submitted, despite being quite “open” about its procedures.

The PleaseRobMe.com example isn’t contradictory but confirmatory, for exactly the reasons you elaborate. “Open” is dangerous. Lanier’s point is that hackers hide behind this “let me show you how dangerous the world is” excuse when they create these things, but if they were locksmiths breaking into people’s cars and leaving notes on their seats reading “You need a better lock,” we’d all freak out. Lanier’s point is that digital people get a double-standard because we’re all intoxicated with “digital,” but if we apply normal standards to their behavior, it’s pretty indefensible.

The problem with “security by obscurity” is that you can still feel secure long after all the security is gone. It’s based on the idea that you can keep the security flaws secret. If you don’t assume that the solutions someone builds are 100% flawless, how do you know the flaws are actually secret? If you remove “hackers,” there would be only two groups of people trying to find those holes — the people who created it (for whom it’s an additional cost of research and a possible need to replace all the products, with no social pressure, since no one knows about the flaws) and the people who actually want to use the flaws to “rob your home,” who would obviously not make them public either. The naivety of this idea rests on the assumption that the second kind of people don’t exist because we are perfect at keeping secrets, which is hardly a rational thing to believe, as the open culture keeps finding those secret flaws.

It’s funny to see the call for stronger investigative reporters along with the call to keep the people who blow the whistle on a public issue silent. “PleaseRobMe” found a problem and created a message which got attention, and no, they didn’t go around houses breaking in and leaving notes. They didn’t even use any kind of sophisticated technology on that site – you can just search Twitter and get that info, their sole addition is putting it in the context of house robbery. If it advocates anything, it is common sense – don’t broadcast where you are to the whole world. I really see no place where “open” in the sense you described is used, I’ve been using some open-source systems for a while, and this looks like a nominal definition without any denotation which is only used to argue for “closed” solutions.

There are some things to think harder about with “open”, at least according to Lanier. First, there is little innovation. Open systems seem to mimic the innovations achieved by work done in “closed” systems. So, that’s why PleaseRobMe.com didn’t use “any kind of sophisticated technology”. Second, they did the digital equivalent of “breaking in and leaving notes”, so I disagree with you there. They could have kept walking by the cars, so to speak, but they had to concatenate the information and make it public, which is analogous in the way I describe. In normal society, their behavior would be reviled. Why is it acceptable in the digital realm? Are there really other standards for human behavior once we use computers?

Yes there are, just as there are different standards for human behavior on a plane and in a park.
I see a problem with common sense in the digital world — people take real-life experience and concepts and create “digital equivalents” without checking whether they do translate. For me this is ordinary people “leaving notes” saying “I won’t be around my car for 2 hours,” making them public, and adding them to the Twitter stream for ease of anonymous search. PleaseRobMe just annotates these notes with “You’re dumb to do this.” You would need to explain where the “breaking in” takes place.

If security through obscurity is “ultimately superior”, then why hasn’t it actually worked in the real world? Open software is vastly more secure than closed software. How many security issues are there for operating systems based on Linux or Unix compared to a closed system like Windows? Why is the open source Apache webserver so much more secure than closed source webservers? Yes, there is arrogance in the “open” communities (just as there is arrogance in closed communities–ever hang out with an Apple fanboy?), but that doesn’t mean they’re wrong.

And while I’m not an advocate for doing open science, I do think that once a result is released into the world, it should be treated as open (most journals require this in their terms for publication). Studies have to be examined and picked apart to know if they’re valid. Should we just accept GlaxoSmithKline’s word when they tell us they’ve done a study showing their new drug is effective and safe? Or should we demand to see the data?

I say PleaseRobMe is contradictory in that their message is ultimately the same as Lanier’s–openness can be a bad and dangerous thing. They’re pointing out the same thing as Lanier, using the open crowd’s own tools to make them aware of the danger of their behavior. Yes, it is a bit flashy and set up to garner attention, but that’s their point, to warn people and get them to change their behavior. It’s hard to do that without offering any actual proof.

Part of the reason Linux is secure is because it mimics UNIX, which was not developed as an open system. As far as Apache, it’s based on UNIX and Linux, so the same story there, standing on the shoulders of closed-system giants AT&T and Bell Labs. What the “open” advocates often forget is that they’re just taking the best of what private industry did.

And Windows was based on DOS, also not an open source system, so what.

The question isn’t one of innovation–you, Lanier and I all agree on the superiority of closed systems there. The question is whether the openness of Unix-based systems provides better security. Real world results would indicate that this is a fact.

And for what it’s worth, the idea of PleaseRobMe isn’t new, in fact it predates the internet. These articles do the exact same thing, publicly airing the ideas that your answering machine will tell people you’re not home, and that funeral notices do so as well. Do you think these newspapers should have been “reviled” and that their actions were “indefensible” and “dangerous”? Perhaps it shows us that people like attention, regardless of the medium they’re using.

Very interesting post. I am sorry I did not read Lanier’s book yet so my personal opinions are purely based on your post.

“Lanier’s most damning points revolve around what he calls “open culture” — the movement spurred by advocates of open access, open standards, open data, open, open, open. While it all sounds good, what it’s actually created is an amoral world in which consequences aren’t considered, the victims are blamed, technical solutions are thought to be better than common sense, creativity has been stifled, commerce is abandoned, and gee-whiz wonderment conceals deeply cynical plays by scheming companies.”

It seems that Lanier makes a causal link between openness (of data and everything) and amorality of the world around us. Excuse me. Was this world more “moral” 20 years ago? No. But then, people were forever linking technological progress (and the mere passage of time) with moral and overall decline, so no surprises here.

“So we have Wikipedia and Linux from open culture, which have done little but drain the commercial viability from traditional encyclopedias and UNIX — emptying the world of a middle-class of programmers, writers, reference editors, and publishers.”

I am not sure that most programmers will agree with that. For goodness’ sake, we have many more programmers than are actually needed, so the talk of “emptying the world of programmers” is bizarre.

“Lanier also paints a commercial world that has divided the haves and have-nots more severely, emptying our culture of what he calls the “intellectual middle-class,” the journalists, musicians, authors, and composers whose careers are made as stringers, session players, genre authors, and journeymen songwriters. These truly creative people have been sidelined by algorithms (Last.fm, Pandora) and digital tools framed as open but really voiding commercial reality from the equation.”

This statement does not make any sense. Last.fm and Pandora are internet radios. People go there to listen. How can internet radio substitute for real musicians and songwriters? By making the music available worldwide, these and other tools have made a lot of great, talented musicians actually *known*. Known because of their music, not because of their publisher’s promotion. That’s why a lot of musicians do embrace the web. True, the music publishers do not like it, but so what — they are not content creators. Personally, I prefer to buy music directly from a musician.

“Lanier argues that we need to restore artificial scarcity, since scarcity is what makes economies work.”

Should we embrace hunger and depletion of earth’s resources too because scarcity is good for economy?

Pretty thought-provoking, isn’t it? Here’s a little more food for thought.

The morality vacuum Lanier argues is due to the infatuation with technology — so, who cares if we make an application that helps drivers evade police radar, even though that’s arguably illegal on at least two fronts? Who cares if we expose your geo-location or activate your webcam without your permission? On the one side, you have technologists who don’t consider the consequences. On the other side, you have a market that isn’t holding digital to the same standards as everything else. It’s not that much different from other situations in the past, but it’s our situation today, so we’re the ones who have to wrestle with it. You can’t just blow it off because there are analogs in the past.

As to programmers, we have more programmers partly because everyone thought computer science was going to be such a great career move. But what are they doing that’s innovative in the “open” space? More Linux isn’t people being paid for innovative programming. Look at how much more innovative programming has been unleashed thanks to the “closed” iPhone application store. More programming that makes MONEY for people AND unleashes innovation. Lanier’s point is that open systems are less innovative than we think, if they’re innovative at all, and they don’t compare with closed systems like PS2, iPhone, Wii, or others for commercial programming that actually provides innovation and livelihoods.

As to music, I took out a long passage of the review to shorten it, so that might be a problem here, but Lanier’s point is that MIDI and the codification of music has led to a lack of innovation, basically leading to the first decades in which no new musical genre has appeared. Contrast that to the “closed” and non-digitally codified 1980s where ska, punk, rap, metal, and hip-hop all emerged. Musicians are playing musical styles, genres, etc., that we’ve all heard before. You’re arguing Pandora and Last.fm have improved distribution, and that’s true. In fact, that may be ALL they’ve done. And that’s part of the point of Lanier’s critique. In addition, while distribution may have improved, commercialization has not.

Your last statement is a rhetorical Hail Mary. Lanier’s point is that natural scarcity no longer exists for information goods, but economies depend on scarcity, so the information economy needs artificial scarcity to survive and thrive. Food is a poor analog because it has natural scarcity in too many places — except in America, where you might argue that artificial scarcity (soda taxes, for example) might be a good idea, too.

While your point is accurate (the 80’s were a fecund era for musical innovation), the pedant in me can’t help but point out that your timeline is way off. Ska dates back to the early 1960’s (the late 70’s saw the first big UK revival), punk was a product of the mid to late 1970’s, metal emerged in the late 1960’s, early 1970’s. Hip hop/rap goes back to the late 1970’s as well.

But most of these genres progressed and became something new in the 1980’s, with punk turning into new wave, which gave birth to synth-pop. Metal went glam, leading to hair metal. And rap became an overpowering cultural force. Of course all of this happened in a digital era as CD’s became commercially available in 1982.

Yep, sorry, showing my awareness zone. Historically, your dates are correct of course, but my commercial memory puts them in the late ’70s and into the mid-’80s for mainstream awareness. I was probably more sheltered than you East Coast kids, anyhow.

One of Lanier’s points in the book is that new musical genres emerged every decade in the 20th century, but nothing that isn’t retro has emerged in the last few decades. I found myself agreeing quite a bit (Duffy is just a throwback singer, for instance, and Green Day, while excellent, is still of genres I know).

A lot of this argument is based on the premise that economic growth is good no matter what — hence the idea that whatever is good for the economy is good. That is one point of view, but not the only one.

The equally valid point of view is that what is good for human welfare is good.

“Your last statement is a rhetorical Hail Mary. Lanier’s point is that natural scarcity no longer exists for information goods, but economies depend on scarcity, so the information economy needs artificial scarcity to survive and thrive.”

Sorry, I don’t really understand the Hail Mary bit, but the idea of locking up information (especially information that is vital for research) to create artificial scarcity seems perverse to me. The claim that “natural scarcity no longer exists for information goods” is highly debatable too.

For a rather different approach, check this article by Rafael Sidi (http://www.researchinformation.info/features/feature.php?feature_id=255) on how open data drives innovation. Note this is a businessman’s point of view, not that of some left-wing fighter for a data liberation front like myself.

Finally: “Musicians are playing musical styles, genres, etc., that we’ve all heard before.” I think that has nothing to do with the web/openness discussion at all. People have been whining about this for as long as recorded music has existed: “There’s been nothing new in music since the ’80s / ’70s / ’50s / Bach…” People have also been declaring jazz dead since the 1920s (and the word “jazz” only appeared in the 1910s).


Lanier’s point is that when there’s little money in doing something, large swaths of practitioners can be disenfranchised and crucial contributors drop out. I think you’re taking an extreme reading of this. His point is that the middle class of some key professions is being hollowed out, and the remaining money is going to either side of the divide.

I thought comparing information to food was an act of hopeful rhetorical desperation (in the US, we call such American football plays a “Hail Mary,” since the throw is a bit of a prayer). The businessman from Elsevier might want to read our post about open data and its costs before declaring open a great business opportunity. Doing things right costs money. You might also want to read my post from the PSP about this.

Lanier’s points on music were a bit dubious, but I found myself worried nonetheless, especially given the lack of instrumental virtuosity in most popular music. Something has changed, I think. Music is way over-produced these days.

“Doing things right costs money.”

True.

Also, not doing things right costs money; the difference is when the money gets spent. I am glad that, for once, (some) governments are doing the wise thing and investing in open access.

As I said, I have not read Lanier’s book. In fact, I am a bit intrigued as to why he is so negative about open access, but not intrigued enough to actually pay for his book.

Try the free articles linked in the first comment after the article. They’ll give you something of a feel for his philosophy.

David – I did read “Digital Maoism”; thanks for posting those links. If the book is in the same style, chances are I’m not going to bother; I think I got his (not very complicated) message.

Well, the Maoism piece was his first public salvo, from 2006. His thinking has evolved somewhat since then, though the themes are similar. The WSJ piece was written in conjunction with his current book.
