Abraham Zapruder’s Bell & Howell Zoomatic movie camera, in the collection of the US National Archives (Photo credit: Wikipedia)

The surveillance society has been emerging gradually: from wars and disasters documented in still photographs through the early 20th century, to newsreels, to the Zapruder film capturing the assassination of President Kennedy in 1963. As television widened the availability of moving pictures tremendously and cameras became small and cheap enough to hide, shows like Candid Camera turned “secret” recording into entertainment. With Watergate, ambient recordings became a centerpiece of an impeachment scandal in the US.

More and smaller devices — both input and output devices — generated more information and less secrecy.

Fast-forward to the modern day, and we have phone video of Saddam Hussein’s last moments and a surreptitious video that hamstrung Mitt Romney’s presidential run, as well as journalists now regularly calling for (and receiving) eyewitness video of events both big and small. We watch these on our tablets and phones. And then there’s this whole YouTube thing . . .

Throughout all this, the balance of privacy and transparency has proven elusive.

The pending launch of Google Glass promises to bring ambient video and audio recording into daily life in a new way, which will have a significant effect on businesses, personal interactions, public events, and private moments, as a smart post by Mark Hurst from the Creative Good blog points out:

The key experiential question of Google Glass isn’t what it’s like to wear them, it’s what it’s like to be around someone else who’s wearing them. . . . Your one-on-one conversation with someone wearing Google Glass is likely to be annoying, because you’ll suspect that you don’t have their undivided attention. And you can’t comfortably ask them to take the glasses off (especially when, inevitably, the device is integrated into prescription lenses). Finally – here’s where the problems really start – you don’t know if they’re taking a video of you. . . . Anywhere you go in public – any store, any sidewalk, any bus or subway – you’re liable to be recorded: audio and video.

Hurst mentions the natural counter-argument — surveillance cameras are all around us, so this isn’t much different. But it is different for some important reasons, he feels:

What makes Glass so unique is that it’s a Google project. And Google has the capacity to combine Glass with other technologies it owns. . . . add in facial recognition and the identity database that Google is building within Google Plus (with an emphasis on people’s accurate, real-world names). . . . consider the speech-to-text software that Google already employs, both in its servers and on the Glass devices themselves. Any audio in a video could, technically speaking, be converted to text, tagged to the individual who spoke it, and made fully searchable within Google’s search index.

As Hurst notes, Google has a lot of symbiotic technologies that Glass would serve. In addition to the processing technologies he refers to, Google owns YouTube. Google has its search engine. Google has Android. Google owns Zagat. Google owns Frommer’s. Google has Google Maps, Google Earth, and Google Street View. You could become a franchise player in the right video — but would you get a franchise contract?

As Hurst puts it (and he should know, because he predicted this in his 2007 book, “Bit Literacy”):

The most important Google Glass experience is not the user experience – it’s the experience of everyone else. The experience of being a citizen, in public, is about to change.

Hurst strikes the tone of someone concerned about how this will change society. It’s a tone Julian Assange has been striking in his efforts through Wikileaks. He believes we should work toward a society with “privacy for the weak and transparency for the powerful.” Ambient surveillance doesn’t fit with this view.

Meanwhile, others think the change may be for the better. These people, who feel that a transparent society is a freer and better society, make their case in a recent article about cypherpunk culture in The Verge. David Brin, author of “The Transparent Society,” is quoted extensively:

[the ethic of privacy for the weak and transparency for the powerful is] already enshrined in law. A meek normal person can sue for invasion of privacy, a prominent person may not. But at a deeper level [Assange’s statement] is simply stupid. Any loophole in transparency ‘to protect the meek’ can far better be exploited by the mighty than by the meek. . . . The meek can never verify that their bought algorithm and service is working as promised, or isn’t a bought-out front for the NSA or a criminal gang. Above all, protecting the weak or meek with shadows and cutouts and privacy laws is like setting up Potemkin villages, designed to create surface illusions.

Adrian Lamo, who turned in Bradley Manning for his release of military files to Wikileaks, is quoted as saying:

Privacy is quite dead. That people still worship at its corpse doesn’t change that. In [the unreleased documentary] Hackers Wanted I gave out my SSN, and I’ve never had cause to regret that. Anyone could get it trivially. The biggest threat to our privacy is our own limited understanding of how little privacy we truly have.

Perhaps a leveling of the playing field is what we need. If very little is truly private — or is private only for a short time — isn’t it better to have multiple perspectives on events rather than only the authorities’ perspective? Unfortunately, the line between private information and government information is blurry. Last year, the US government’s requests for data about Google’s users increased 37%. Granted, the number of requests remains small, but precedents can expand into large programs once set.

To some, it seems that even as more information has become available, less freedom has become the norm. One of these people is Jacob Appelbaum, an independent computer security expert occasionally affiliated with Wikileaks. He lists a number of examples of diminished freedoms, including indefinite detention under the National Defense Authorization Act of 2012, warrantless wiretaps, state-sanctioned drone strikes, state-sponsored malware, and the Patriot Act.

The Galgenhumor of our era revolves around things that most people simply thought impossible in our lifetime. It isn’t a great time to be a dissenting voice of any kind in our American empire. What we will remember is the absolute silence of so many, when the above things became normalized.

Yet politics driven by terrorism often brings relatively brief (and regrettable) infringements on freedoms. There is a historical frame to these times, one we’ve seen before and apparently haven’t learned from. At the same time, governments find themselves just as confused over privacy and information availability as anyone, from legitimate efforts to protect data used in scientific research, medical records, and financial transactions to more questionable projects like the Patriot Act. Where is the line? It shifts over time and remains elusive. Even the proper definition of “privacy” often seems slippery. Do we really want terrorists or criminals to have as much privacy as the rest of us?

Where does privacy begin and end? How is it different from secrecy? And how do researchers and publishers navigate these waters in the future?

When you find the answer, tell it to someone wearing Google Glass. Maybe we’ll all see it shortly thereafter.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

9 Thoughts on "Welcome to the Surveillance Society of Google Glass — Should We Be Worried or Relieved?"

Kent, this is a very interesting question.

One of the themes here at SXSW this week has been “the Internet of things” (sensors in our cars, refrigerators, etc.) as well as personal sensors (for medical use, for general consumer use) – the tie-in being that data about us – us individually – is getting more and more ubiquitous. There are so many good things that can come from that (more awareness of health, convenience through automation/anticipation of needs), but there is also the “dark side.”

Where’s the line? At least with some data sources (Fitbit, Jawbone Up, Basis Band, Nike Fuel, Nest, etc.) we opt in and can limit what’s shared, but when someone else produces data about you (Google Glass), what can you do? This isn’t new, it’s just getting more prevalent (our addresses, phone numbers, and email addresses have long been accessible to marketers; on FB I can limit my posts to family/friends, for example, but one of them can repost what I said/did without limitation).

So it’s interesting to note that scientists were able to work out the mass and orbital parameters of the meteor that exploded (and impacted) over Russia due to the volume of timestamped video available. That material was available because Russians like to have video evidence for when (not if, it seems) they end up in an accident on Russia’s scary roads. “The street finds uses for things, uses unintended by the manufacturer” (Gibson).

Here in the UK we’ve all had to deal with the very annoying (just my opinion – cleaned up for public consumption) EU cookie directive (see here for some background: http://en.wikipedia.org/wiki/Directive_on_Privacy_and_Electronic_Communications), which requires ‘informed’ consent before a cookie can be placed on a user’s machine. Note this only has to happen if the relevant EU country has put the directive into law – we have, many others have not, yet. Annoying though it may be, at least there’s some indication of some thought over these issues. Mind you, our Government is still pressing on with the Digital Economy Act and its innovative interpretation of due process for suspected file sharers (http://en.wikipedia.org/wiki/Digital_Economy_Act_2010).

Germany is really the EU country to watch. They seem to be taking a very restrictive approach to the use of data in public. Germany has banned Google Street View, for example.

Overall, the EU is taking a variety of steps here. The draft Data Protection regulations (http://en.wikipedia.org/wiki/Data_Protection_Directive) are informative – especially as they can be compared to the state of affairs over in the US. Notably, the right to be forgotten is part of the current data directive (which will be replaced by the above regulations). Speaking of which – there’s a comparison between the EU and the US approaches on Forbes: http://www.forbes.com/sites/michaelvenables/2013/03/08/the-ecs-right-to-be-forgotten-proposal-in-the-u-s/ How exactly one gets forgotten in a digital world is an interesting point to ponder – perhaps our fears of bit rot need to be replaced by fears of incomplete data storage on us, leading to false assumptions about who we are and what our intentions might be, predicated on the mysterious fulminations of algorithms we can barely comprehend.

Ah Google and privacy, never the best of mixes. See this latest on Google storing all of your wireless passwords on their servers:
http://brooksreview.net/2013/02/adventures-in-privacy-google-edition-part-ii/

And I think the societal issues with a product like Google Glass go beyond just privacy. To me, they threaten to intensify one of the worst societal technology trends: our growing disconnect from the real world. One of Google’s own engineers put the problem this way:
http://www.theverge.com/2013/2/22/4013406/i-used-google-glass-its-the-future-with-monthly-updates

“I went to the shuttle stop and I saw a line of not 10 people but 15 people standing in a row like this,” she puts her head down and mimics someone poking at a smartphone. “I don’t want to do that, you know? I don’t want to be that person.”

Unfortunately, Google’s solution is to put that smartphone continuously and directly in front of your face. Rather than solving the disconnection problem, they’re solving a posture problem.

So beyond worrying about whether the “glasshole” you’re sitting with is filming you, you also never know if they’re actually engaged in conversation with you or checking their Facebook page. As John Gruber recently put it:
http://daringfireball.net/linked/2013/02/27/brin-glass

I can see the argument that dicking around with our phones in public is not cool, that we should pay more attention to our companions and surroundings, and less to our computer displays. Strapping a computer display to your face is not the answer.

Oh, the irony of using a Facebook property to comment on the privacy issues with Google Glass…

I don’t think Google has considered the hostility these things could generate. Maybe I’m projecting, but the very idea makes me mad. In special situations, maybe, or for certain occupations, but for recreational users walking down the street? Or worse, in bars or restaurants and other public places?? Once people know what they are, there will be reactions.

I couldn’t agree more. Anonymity is DEAD, and it’s not coming back. Reality and our online lives have merged.
